Next: release candidate #1112

Merged: 421 commits, merged on Sep 19, 2014

Commits (421, changes shown from all commits):
c98ed3b
minor changes to variable names and error messages + set default back…
mohomran Aug 29, 2014
5615bf6
updated lenet_train_test.prototxt + minor correction to create_mnist.sh
mohomran Aug 31, 2014
aae8545
parse_log.sh adapted to new training log format + fixed typos and upd…
mohomran Aug 31, 2014
92f6c26
Merge pull request #1015 from mohomran/fixing_parse_log_script
shelhamer Aug 31, 2014
8bf1e60
Merge pull request #1008 from mohomran/mnist_with_lmdb
shelhamer Aug 31, 2014
c80c6f3
Merge pull request #1014 from longjon/cleaner-pycaffe
shelhamer Sep 1, 2014
fe95024
include comment on CPU mode fine-tuning for Flickr example
shelhamer Sep 1, 2014
f7458b5
make no GPU error in CPU-only mode a little clearer
shelhamer Sep 1, 2014
5f350aa
use LMDB in mnist autoencoder examples
jeffdonahue Sep 1, 2014
9d10569
Solver switching support & implementation of Nesterov's accelerated g…
qipeng Jul 19, 2014
8a9c268
restored virtuals in solver.hpp
qipeng Jul 20, 2014
ed8b1da
converted pointers to shared_ptr
qipeng Jul 20, 2014
8b3dde0
fixed solver constructor in train_net.cpp
qipeng Jul 21, 2014
0144de6
improved numerical stability for AdaGrad
qipeng Jul 23, 2014
76ef2ca
bugfixes for AdaGrad
qipeng Jul 23, 2014
a683c40
Added L1 regularization support for the weights
qipeng Jul 24, 2014
b0ec531
fixed caffe.proto after a mistaken rebase
qipeng Jul 29, 2014
3f7a910
Addressed Yangqing's comments
qipeng Jul 30, 2014
23d4430
fixes after rebase
qipeng Jul 30, 2014
29b3b24
proto conflict, lint, and math_functions (compiler complaint)
qipeng Aug 20, 2014
7f2e66e
added unit test for solvers and fixed solver bugs
qipeng Aug 21, 2014
dbb9296
cleanup caffe.proto
jeffdonahue Aug 22, 2014
f206c64
Merge Test{SGD,AdaGrad,Nesterov}Solver; they become subclasses of
jeffdonahue Aug 22, 2014
c1ff97c
Added sanity check for AdaGradSolver; added MNIST examples for solvers
qipeng Aug 26, 2014
06f335f
lint
qipeng Aug 26, 2014
a464df4
Re-added solver switch into the new caffe main executable; fixed AdaGr…
qipeng Aug 26, 2014
36f9de4
lint
qipeng Aug 26, 2014
9722649
hot fix for warning
qipeng Aug 26, 2014
5894f03
mnist_autoencoder: always compute both cross-entropy loss and L2
jeffdonahue Sep 1, 2014
b49b2d3
Add "test-on-train" stage to test accuracy on the training data; correct
jeffdonahue Sep 1, 2014
eaf28fe
make adagrad/nesterov train scripts follow new "run-from-root"
jeffdonahue Sep 1, 2014
b0f97fd
make MNIST autoencoder solvers start from base_lr 0.01 and step (much
jeffdonahue Sep 1, 2014
88e1797
Merge branch 'qipeng-solvers' into dev
jeffdonahue Sep 1, 2014
77d6661
revert tools/train_net.cpp to previous, deprecated version
jeffdonahue Sep 1, 2014
396e4af
add CUDA 6.5 error CUBLAS_STATUS_LICENSE_ERROR to cublasGetErrorString
jeffdonahue Sep 1, 2014
ed13a61
[pycaffe] add converter for vector<string> used by _*_names
longjon Sep 2, 2014
afd8f37
[pycaffe] expose Net.blob_names and Net.layer_names
longjon Sep 2, 2014
3e12d49
[pycaffe] use _blob_names, _layer_names instead of removed .name
longjon Sep 2, 2014
bf19aaf
Merge pull request #1023 from longjon/unbreak-pycaffe
shelhamer Sep 2, 2014
d3c6eb6
fixed relative path and prefix for adagrad-optimised autoencoder snap…
mohomran Sep 2, 2014
cdcf888
Merge pull request #1025 from mohomran/minor_fix_to_mnist_solver_prot…
shelhamer Sep 2, 2014
fea9f0c
Create base data layer and base prefetching data layer
kloudkl Aug 28, 2014
ee65a97
Extract common data layer functionality out of the DataLayer
kloudkl Aug 28, 2014
6833dc0
Remove duplicate code from the ImageDataLayer
kloudkl Aug 28, 2014
b794cf9
Simplify the WindowDataLayer using the base class
kloudkl Aug 28, 2014
5af0d24
The BasePrefetchingDataLayer shouldn't join the thread
kloudkl Aug 28, 2014
4f7c9b4
Implement Forward_gpu in the base prefetching data layer
kloudkl Aug 28, 2014
4c35ad2
Add transformer to the memory data layer
kloudkl Aug 28, 2014
156a5a2
Remove pthread which has been replaced with boost thread
kloudkl Aug 29, 2014
3c9a13c
Move transform param one level up in the proto to reduce redundancy
kloudkl Aug 29, 2014
05ade81
Test adding images w/o resizing to the memory data layer
kloudkl Aug 29, 2014
20c5992
Move the remaining duplicate code of the data layers into their base class
kloudkl Aug 29, 2014
09cfe1c
Fix conflict between nvcc and boost for cmake
kloudkl Aug 30, 2014
74fa879
Add lint rule for caffe data layer setup
kloudkl Aug 31, 2014
725e98d
Remove OpenCV stuff from the memory data layer and io utils
kloudkl Sep 2, 2014
ab1f9b5
Add leveldb header back to util/io.cpp
kloudkl Sep 2, 2014
25ce7f5
Place InternalThreadEntry lower in the {,Image,Window}DataLayer.cpp
kloudkl Sep 2, 2014
4761072
Add and transform Datum vector in the MemoryDataLayer
kloudkl Sep 2, 2014
858ad41
Correct the datum size checking conditions of the data layers
kloudkl Sep 3, 2014
a08f111
Initialize the transformer rng in the base data layer
kloudkl Sep 3, 2014
5d8c93c
[docs] skeleton documentation subjects
shelhamer Aug 24, 2014
b256a76
[docs] draft tutorial subjects
shelhamer Aug 24, 2014
17252db
[docs] add note on Caffe convolution
shelhamer Sep 3, 2014
d15405a
use kramdown for markdown syntax; add mathjax
jeffdonahue Aug 28, 2014
59eaba1
[wip] vision layers, start convolution
jeffdonahue Aug 28, 2014
eebf2e2
add .Doxyfile: the default Doxygen config file from `doxygen -g`
jeffdonahue Aug 28, 2014
c34ed49
add "make {docs,doxygen}" targets to build doxygen-generated docs
jeffdonahue Aug 28, 2014
3223ca4
.gitignore doxygen-generated documentation
jeffdonahue Aug 28, 2014
ac9275b
.Doxyfile: modify to generate C++ docs, excluding tests
jeffdonahue Aug 28, 2014
afba4e6
.Doxyfile: don't warn if undocumented (maybe someday...)
jeffdonahue Aug 29, 2014
55eebfd
layer.hpp: Doxygen-style documentation
jeffdonahue Aug 28, 2014
9f58574
loss_layers.hpp: Doxygen-style documentation
jeffdonahue Aug 28, 2014
c5d5308
neuron_layers.hpp: Doxygen-style documentation
jeffdonahue Aug 28, 2014
57171b5
common_layers.hpp: Doxygen \brief & TODO stubs.
jeffdonahue Aug 30, 2014
c3151bb
data_layers: Doxygen \brief & TODO stubs.
jeffdonahue Aug 30, 2014
f2f73cf
vision_layers.hpp: Doxygen \brief & TODO stubs.
jeffdonahue Aug 30, 2014
81eb2eb
filler.hpp: add brief filler descriptions
jeffdonahue Aug 30, 2014
c84908a
blob.hpp: a little Doxygen-style documentation
jeffdonahue Aug 30, 2014
19cf385
syncedmem.hpp: \brief and todo
jeffdonahue Aug 30, 2014
4572663
solver.hpp: add \briefs
jeffdonahue Aug 30, 2014
9c31482
net.hpp: Doxygen-format docs
jeffdonahue Aug 30, 2014
134e240
wrap up solver.md -- add update info for all solvers with citations;
jeffdonahue Sep 3, 2014
9f19030
[docs] draft data
shelhamer Sep 3, 2014
b367317
[docs] suggest the CVPR14 deep learning tutorial for nice contrast
shelhamer Sep 3, 2014
09a1ce7
update doxygen config to stop warnings
shelhamer Sep 3, 2014
c4b9ec5
[docs] configure doxygen + docs script for docs/doxygen site output
shelhamer Sep 3, 2014
135786a
Merge pull request #973 from shelhamer/tutorial-docs
shelhamer Sep 3, 2014
9302b1d
[example] upgrade fine-tuning example to new transformation param
shelhamer Sep 3, 2014
f4727dc
Merge pull request #955 from kloudkl/data-layers
shelhamer Sep 3, 2014
66acf92
Update paths
dgolden1 Sep 3, 2014
be9a912
Point to local file, not github file
dgolden1 Sep 3, 2014
f9a6778
Correct reference to lenet_train_test.prototxt
dgolden1 Sep 3, 2014
a865a23
Inline latest lenet_solver.prototxt
dgolden1 Sep 3, 2014
945a849
Merge pull request #1031 from CellScope/mnist-tutorial-update
shelhamer Sep 3, 2014
0766cd9
[example] drop stale mentions of glog env var
shelhamer Sep 3, 2014
b136da5
[example] convert mnist name fix (crashes xcode compiler)
qipeng Sep 3, 2014
dc259d7
Merge pull request #1033 from qipeng/dev
jeffdonahue Sep 3, 2014
59fafb1
[docs] default setting for layout
sergeyk Sep 4, 2014
4175104
Merge pull request #1034 from sergeyk/dev
shelhamer Sep 4, 2014
e553573
[models] adding zoo readme; caffenet, alexnet, and rcnn models in zoo…
sergeyk Aug 13, 2014
bcc12ef
snapshot model with caffemodel extension
shelhamer Aug 28, 2014
39f7a4d
proofread model zoo
shelhamer Aug 28, 2014
84917d6
removing unneeded scripts from imagenet example
sergeyk Sep 3, 2014
a661001
Renaming CaffeNet model prototxts and unignoring models/*
sergeyk Sep 4, 2014
d5e9739
updating feature extraction example
sergeyk Sep 4, 2014
da715ea
removed mention of getting_pretrained_models page and old paths
sergeyk Sep 4, 2014
bc601e9
minor fixes to docs
sergeyk Sep 4, 2014
c6827bf
flickr style fine-tuning model (separated from example read me)
sergeyk Sep 4, 2014
51c4e6e
script to upload/update model info as gist
sergeyk Sep 4, 2014
2bdf516
add test_initialization option to allow skipping initial test
longjon Sep 4, 2014
d8f56fb
add SILENCE layer -- takes one or more inputs and produces no output
jeffdonahue Jul 4, 2014
f2324fe
Merge pull request #624 from jeffdonahue/squash-layer
jeffdonahue Sep 4, 2014
f7baf2b
fix model download advice and prototxt name for fine-tuning
shelhamer Sep 4, 2014
d46f3cd
[example] update ImageNet timing for K40
shelhamer Sep 4, 2014
adbea64
Merge pull request #917 from sergeyk/model_zoo
shelhamer Sep 4, 2014
e23ac45
HDF5 classification example
sergeyk Sep 4, 2014
a857001
fix fine-tuning example: paths, test acc., and total fine-tuning time
shelhamer Sep 4, 2014
f748aee
[fix] stop cloc complaint about cu type
shelhamer Sep 4, 2014
b5b02dc
[docs] fix formatting and other errors in loss & solver
jeffdonahue Sep 4, 2014
51dd4b2
added a two-layer network that gets higher accuracy
sergeyk Sep 5, 2014
7c78cdb
Gradient-based solver test fix
qipeng Sep 5, 2014
46aa65e
Merge pull request #1040 from qipeng/solver-test-fix
shelhamer Sep 5, 2014
ef042f6
Merge pull request #1039 from sergeyk/dev
shelhamer Sep 5, 2014
50d9d0d
Merge pull request #1036 from longjon/test-initialization-param
shelhamer Sep 5, 2014
e82e728
[docs] add titles
shelhamer Sep 5, 2014
530ec45
[docs] link tutorial
shelhamer Sep 5, 2014
ec861a5
[docs] fix br code
shelhamer Sep 5, 2014
ced3a37
relu,sigmoid,tanh
Yangqing Sep 5, 2014
4880a2b
more blob details
Yangqing Sep 5, 2014
1f64148
fix leaky relu
Yangqing Sep 5, 2014
cb4ae5e
update net
Yangqing Sep 5, 2014
f12a74a
neuron layers doc
Yangqing Sep 5, 2014
f15fc36
conv and pooling
Yangqing Sep 6, 2014
5eb8dd3
more layers
Yangqing Sep 6, 2014
3578d91
Added initial Hinge Loss
sguada Sep 6, 2014
64fa7ca
shift CUDA code out of common
shelhamer Sep 1, 2014
cd52392
groom proto: sort layer type parameters, put loss_weight after basics
shelhamer Sep 1, 2014
a3dcca2
add engine parameter for multiple computational strategies
shelhamer Sep 2, 2014
237560c
ifdef engine default
shelhamer Sep 2, 2014
98b4cd3
strategize Caffe convolution
shelhamer Sep 2, 2014
6332376
strategize pooling
shelhamer Sep 2, 2014
8e8872d
strategize relu, sigmoid, tanh
shelhamer Sep 2, 2014
dd958e0
strategize softmax
shelhamer Sep 2, 2014
791243f
grooming: drop pointless overrides, stub layer comments
shelhamer Sep 2, 2014
347fdbd
default engine to Caffe according to compile flag
shelhamer Sep 3, 2014
d5605ec
default engine to Caffe in case config is missing
shelhamer Sep 4, 2014
e05428f
revert engine switch for build to always include caffe engine
shelhamer Sep 6, 2014
e922d11
revert separate strategies: engines will extend the caffe standards
shelhamer Sep 6, 2014
4a42528
Merge pull request #1022 from shelhamer/engine
shelhamer Sep 7, 2014
68849e4
[docs] fixup the MathJax notation in tutorial/layers
longjon Sep 7, 2014
c099fd8
[doc] minor edits to convolution layer in tutorial
longjon Sep 7, 2014
4f977d0
[docs] fix pooling markdown and add some comments in tutorial
longjon Sep 7, 2014
85c9365
[docs] add LRN layer to tutorial/layers
longjon Sep 7, 2014
853d65a
[docs] split layer params in required/optional
longjon Sep 7, 2014
40fa5be
[docs] in tutorial/layers, Options -> Parameters
longjon Sep 7, 2014
1545628
[docs] tutorial/layers: brief descriptions of some loss layers
longjon Sep 7, 2014
bd13f32
[docs] tutorial/layers: clean up sample markdown
longjon Sep 7, 2014
cbc50e1
[docs] tutorial/layers: describe some more data layers
longjon Sep 7, 2014
b37f4f9
[docs] tutorial/layers: fix inner product sample
longjon Sep 7, 2014
3cf3df8
fix transform_param in mnist_autoencoder.prototxt
jeffdonahue Sep 7, 2014
fb0a3d0
remove uses of tmpnam
jeffdonahue Sep 5, 2014
3182b1c
add <cuda>/lib64 only if exists to suppress linker warnings
jeffdonahue Sep 7, 2014
1cb7040
enabled object file reusing in test builds
akosiorek Sep 5, 2014
37e55fa
cpp and cu files processed separately in test build
akosiorek Sep 6, 2014
9086df9
added common.cpp explicitly to tests
akosiorek Sep 7, 2014
77d9124
add cuDNN to build
shelhamer Sep 1, 2014
8819f59
call __signbit for CUDA >= 6.5 implementation
shelhamer Sep 4, 2014
d1b38ee
strategize cuDNN convolution
shelhamer Sep 2, 2014
00f5fa6
strategize cuDNN pooling
shelhamer Sep 6, 2014
14a9198
strategize cuDNN activations: ReLU, Sigmoid, TanH
shelhamer Sep 6, 2014
84bd1f5
strategize cuDNN softmax
shelhamer Sep 6, 2014
9e3d86f
CUDNN_CHECK
shelhamer Sep 6, 2014
c65d5a0
report cuDNN error string
shelhamer Sep 6, 2014
359197b
[docs] include cuDNN in installation and performance reference
shelhamer Sep 7, 2014
396da71
Repair crash in conv_layer due to weight pointer being NULL.
jyegerlehner Sep 8, 2014
a739cda
Fix more lint.
jyegerlehner Sep 8, 2014
5ab3d97
Merge pull request #1048 from jyegerlehner/conv_layer-init-weight
jeffdonahue Sep 8, 2014
adaad52
Merge pull request #1045 from akosiorek/origin/dev
jeffdonahue Sep 8, 2014
68e2657
Fixed CMake script of FindOpenBLAS.
niuzhiheng Sep 8, 2014
ae85996
Merge pull request #1049 from niuzhiheng/dev
jeffdonahue Sep 8, 2014
3bafe2f
Merge pull request #1046 from shelhamer/cudnn
shelhamer Sep 8, 2014
99c4ed5
[lint] cuDNN conv declaration
shelhamer Sep 8, 2014
2d88103
linecount counts more dirs than just src/
jeffdonahue Sep 8, 2014
e855bb9
Merge pull request #1044 from jeffdonahue/no-tmpnam
jeffdonahue Sep 8, 2014
8cfd587
Merge pull request #1050 from jeffdonahue/linecount-more
jeffdonahue Sep 8, 2014
63bad31
Revert "call __signbit for CUDA >= 6.5 implementation" -- doesn't
jeffdonahue Sep 8, 2014
fc921bf
Back-merge to dev for slides
shelhamer Sep 8, 2014
761c815
Implemented elementwise max layer
to3i Jul 11, 2014
6bda406
lint & reduce gradient check stepsize to pass checks
jeffdonahue Sep 8, 2014
d149c9a
Added contrastive loss layer, associated tests, and a siamese network…
Aug 21, 2014
133b4db
Merge pull request #1053 from jeffdonahue/to3i-elem_max_layer
jeffdonahue Sep 10, 2014
be9c5bd
Fix lmdb travis with openldap
ste-m5s Sep 10, 2014
15538f8
Merge pull request #1067 from bhack/lmdb
jeffdonahue Sep 11, 2014
4ce6e43
restore "red X" build failures in Travis
jeffdonahue Sep 8, 2014
f036ef4
add -fPIC flag to CMake build
jeffdonahue Sep 11, 2014
c69b3b4
Merge pull request #1051 from jeffdonahue/travis-red-errors
jeffdonahue Sep 11, 2014
3a69e22
Add ppa for gflag and glog
ste-m5s Sep 12, 2014
431a516
Update CUDA to version 6.5 in the Travis install script
kloudkl Sep 12, 2014
d54846c
fix out-of-date next ID comment for SolverParameter
longjon Sep 14, 2014
e294f6a
fix spelling error in caffe.proto
longjon Sep 14, 2014
503ac0b
Fix comments
bhack Sep 14, 2014
8de9ab0
Fix a little typo
bhack Sep 14, 2014
aa10e72
Merge pull request #1076 from kloudkl/cuda-6.5
shelhamer Sep 14, 2014
2da6bc9
Merge pull request #1077 from bhack/glog_ppa
jeffdonahue Sep 14, 2014
bbd166e
fix caffe train GPU initialization
longjon Sep 15, 2014
1f4e039
Merge pull request #1083 from longjon/fix-solver-gpu-init
shelhamer Sep 15, 2014
0120476
[example] update paths in net surgery
shelhamer Sep 16, 2014
4e6d977
[fix] snapshot model weights as .caffemodel, solver state as .solvers…
shelhamer Sep 16, 2014
06d7310
set up datum size for WindowDataLayer
ronghanghu Sep 16, 2014
4b1f53c
Merge pull request #1091 from ronghanghu/fix_window_data_layer
jeffdonahue Sep 16, 2014
0fb2faf
Merge pull request #1088 from shelhamer/fix-solverstate-filename
longjon Sep 16, 2014
aecab61
[Bugfix] Move error checking closer to file read
dgolden1 Sep 9, 2014
a77ca76
Merge pull request #1093 from CellScope/io-cant-load-error-msg
shelhamer Sep 16, 2014
3fc22b3
Update readme.md files of cifar10 and mnist examples. Fixed broken li…
cNikolaou Sep 17, 2014
1096dde
Updated mnist/readme.md file with additional information.
cNikolaou Sep 18, 2014
e4d48c5
test convolution against explicit reference implementation
shelhamer Sep 17, 2014
355af16
test convolution by random weights for robustness
shelhamer Sep 18, 2014
18ca362
[docs] comment ConvolutionLayer
shelhamer Sep 18, 2014
9a7f0a0
[docs] lenet grooming
shelhamer Sep 18, 2014
c3a69b7
Merge pull request #1100 from cNikolaou/issue1099
shelhamer Sep 18, 2014
8dac339
Merge pull request #1104 from shelhamer/conv-comments-tests
shelhamer Sep 18, 2014
69bf6b5
use Blob directly instead of shared_ptr for EltwiseLayer::max_idx_
longjon Sep 11, 2014
3194bb1
add abstract Layer::Reshape, and document the new method protocol
longjon Sep 10, 2014
4fff966
don't reallocate blobs when shrinking memory use
longjon Jul 2, 2014
87de5ed
enable reshaping in the forward pass
longjon Sep 10, 2014
5ce519c
separate setTensor4dDesc from createTensor4dDesc
longjon Sep 11, 2014
d7e8f2a
separate setConvolutionDesc from createConvolutionDesc
longjon Sep 11, 2014
4b34c72
split off Reshape for data layers
longjon Sep 10, 2014
62bc0a8
split off Reshape for loss layers
longjon Sep 11, 2014
256209d
split off Reshape for neuron layers
longjon Sep 11, 2014
07d6246
split off Reshape for common layers
longjon Sep 11, 2014
6c63b8c
split off Reshape for vision layers
longjon Sep 11, 2014
d2de2ee
call Reshape in Layer::SetUp
longjon Sep 12, 2014
4f1b668
default LayerSetUp to no-op instead of NOT_IMPLEMENTED
longjon Sep 12, 2014
db5bb15
test net reshaping
longjon Jul 2, 2014
24350a6
include Reshape in caffe time
longjon Sep 12, 2014
490077e
add Net::Reshape for only reshaping
longjon Sep 12, 2014
fdf2de1
[pycaffe] expose Net::Reshape
longjon Sep 12, 2014
0b5e11d
[docs] clarify the use of Blob::Reshape a bit
longjon Sep 12, 2014
d833ab3
check that LRN's local_size is odd as the current implementation requ…
longjon Sep 12, 2014
8008533
Merge pull request #594 from longjon/layer-reshaping
shelhamer Sep 18, 2014
08d7f8c
[model zoo] download gist script
sergeyk Sep 18, 2014
58dce0e
Merge pull request #1110 from sergeyk/dev
shelhamer Sep 18, 2014
4e02e06
[model zoo] download from gist grooming
shelhamer Sep 19, 2014
7a507d6
[model zoo] ignore models -- only for reference or zoo
shelhamer Sep 19, 2014
a920a14
[example] resurrect imagenet training scripts
shelhamer Sep 19, 2014
e146423
[docs] order ipython notebooks
shelhamer Sep 19, 2014
7c3c089
Merge pull request #959 from nickcarlevaris/contrastive_loss
shelhamer Sep 19, 2014
403b56b
[example] groom siamese notebook
shelhamer Sep 19, 2014
89fd7da
relax precision of gradient-based solver tests
shelhamer Sep 19, 2014
2,335 changes: 2,335 additions & 0 deletions .Doxyfile

Large diffs are not rendered by default.

24 changes: 22 additions & 2 deletions .gitignore
@@ -47,18 +47,38 @@ python/caffe/proto/
# User's build configuration
Makefile.config

# Data and examples are either
# Data and models are either
# 1. reference, and not casually committed
# 2. custom, and live on their own unless they're deliberately contributed
data/*
examples/*
models/*
*.caffemodel
*.solverstate
*.binaryproto
*leveldb
*lmdb

# LevelDB files
*.sst
*.ldb
LOCK
LOG*
CURRENT
MANIFEST-*

# Generated documentation
docs/_site
docs/gathered
_site
doxygen
docs/dev

# Sublime Text settings
*.sublime-workspace
*.sublime-project

# Eclipse Project settings
*.*project

# CMake generated files
*.gen.cmake
55 changes: 15 additions & 40 deletions .travis.yml
@@ -1,56 +1,31 @@
# Use a build matrix to do two builds in parallel:
# one using CMake, and one using make.
env:
matrix:
- WITH_CUDA=false WITH_CMAKE=false
- WITH_CUDA=false WITH_CMAKE=true
- WITH_CUDA=true WITH_CMAKE=false
- WITH_CUDA=true WITH_CMAKE=true

language: cpp

# Cache Ubuntu apt packages.
cache: apt

compiler:
- gcc
# Disable clang build: doesn't seem to work on Linux.
# (@jeffdonahue: Travis buildbot's failure behavior is similar to what I see
# building on Linux.)
# - clang
compiler: gcc

before_install:
- echo $LANG
- echo $LC_ALL
- sudo apt-get -y update
- sudo apt-get -y install wget git curl python-dev python-numpy libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev protobuf-compiler libatlas-dev libatlas-base-dev bc
- export NUM_THREADS=4
- export SCRIPTS=./scripts/travis

install:
- wget https://google-glog.googlecode.com/files/glog-0.3.3.tar.gz -O /tmp/glog-0.3.3.tar.gz && tar -C /tmp -xzvf /tmp/glog-0.3.3.tar.gz && rm /tmp/glog-0.3.3.tar.gz
- cd /tmp/glog-0.3.3 && ./configure && make && sudo make install && cd -
- wget https://github.com/schuhschuh/gflags/archive/master.zip -O /tmp/gflags-master.zip && pushd /tmp/ && unzip gflags-master.zip && cd gflags-master && mkdir build && cd build && export CXXFLAGS="-fPIC" && cmake .. && make VERBOSE=1 && sudo make install && popd
- curl http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1204/x86_64/cuda-repo-ubuntu1204_6.0-37_amd64.deb -o /tmp/cuda_install.deb && sudo dpkg -i /tmp/cuda_install.deb && rm /tmp/cuda_install.deb
- sudo apt-get -y update
# Install the minimal CUDA subpackages required to test Caffe build.
# For a full CUDA installation, add 'cuda' to the list of packages.
- sudo apt-get -y install cuda-core-6-0 cuda-extra-libs-6-0
# Create CUDA symlink at /usr/local/cuda
# (This would normally be created by the CUDA installer, but we create it
# manually since we did a partial installation.)
- sudo ln -s /usr/local/cuda-6.0 /usr/local/cuda
- curl https://gitorious.org/mdb/mdb/archive/7f038d0f15bec57b4c07aa3f31cd5564c88a1897.tar.gz -o /tmp/mdb.tar.gz && tar -C /tmp -xzvf /tmp/mdb.tar.gz && rm /tmp/mdb.tar.gz
- cd /tmp/mdb-mdb/libraries/liblmdb/ && make && sudo make install && cd -
- sudo -E $SCRIPTS/travis_install.sh

before_script:
- mv Makefile.config.example Makefile.config
- export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
- export NUM_THREADS=4
- if ! $WITH_CMAKE; then $SCRIPTS/travis_setup_makefile_config.sh; fi

script:
# CPU-GPU: build only.
- export CPU_ONLY=0
- make --keep-going --jobs=$NUM_THREADS all
- make clean
# CPU-only: comprehensive.
- export CPU_ONLY=1
- make --keep-going --jobs=$NUM_THREADS all test warn lint
- make runtest
- make --jobs=$NUM_THREADS all
- make --jobs=$NUM_THREADS test
- make --jobs=$NUM_THREADS warn
- make --jobs=$NUM_THREADS lint
- make --jobs=$NUM_THREADS pycaffe
script: $SCRIPTS/travis_build_and_test.sh

notifications:
# Emails are sent to the committer's git-configured email address by default,
90 changes: 90 additions & 0 deletions CMakeLists.txt
@@ -0,0 +1,90 @@
cmake_minimum_required(VERSION 2.8.8)
project( Caffe )

### Build Options ##########################################################################

option(CPU_ONLY "Build Caffe without GPU support" OFF)
option(BUILD_PYTHON "Build Python wrapper" OFF)
option(BUILD_MATLAB "Build Matlab wrapper" OFF)
option(BUILD_EXAMPLES "Build examples" ON)
option(BUILD_SHARED_LIBS "Build SHARED libs if ON and STATIC otherwise" OFF)

if(NOT BLAS)
set(BLAS atlas)
endif()

if(NOT CUDA_TEST_DEVICE)
set(CUDA_TEST_DEVICE -1)
endif()

# Install Prefix
if (CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
set (CMAKE_INSTALL_PREFIX "${CMAKE_BINARY_DIR}/install" CACHE PATH "Default install path" FORCE )
endif()

### Configuration ###########################################################################
# Compiler Flags
set(CMAKE_CXX_COMPILER_FLAGS ${CMAKE_CXX_COMPILER_FLAGS} -Wall)
set(CMAKE_CXX_FLAGS ${CMAKE_CXX_FLAGS} -fPIC) # set global flags
set(CMAKE_CXX_FLAGS_DEBUG ${CMAKE_CXX_FLAGS_DEBUG}) # set debug flags
set(CMAKE_CXX_FLAGS_RELEASE ${CMAKE_CXX_FLAGS_RELEASE}) # set release flags

# Global Definitions
if(CPU_ONLY)
add_definitions(-DCPU_ONLY)
endif()

# Include Directories
set(${PROJECT_NAME}_INCLUDE_DIRS ${CMAKE_SOURCE_DIR}/include)
include_directories(${${PROJECT_NAME}_INCLUDE_DIRS})
include_directories(${CMAKE_SOURCE_DIR}/src)

# CMake Scripts dir
set(CMAKE_SCRIPT_DIR ${CMAKE_SOURCE_DIR}/CMakeScripts)

# CMake module path for custom module finding
set( CMAKE_MODULE_PATH ${CMAKE_MODULE_PATH} ${CMAKE_SCRIPT_DIR})

# CUDA is required globally
if(NOT CPU_ONLY)
find_package(CUDA 5.5 REQUIRED)
include_directories(${CUDA_INCLUDE_DIRS})
endif()

### Subdirectories ##########################################################################

add_subdirectory(src/gtest)
add_subdirectory(src/caffe)
add_subdirectory(tools)

if(BUILD_EXAMPLES)
message(STATUS "Examples enabled")
add_subdirectory(examples)
endif()

if(BUILD_PYTHON)
message(STATUS "Python enabled")
add_subdirectory(python)
endif()

if(BUILD_MATLAB)
message(STATUS "Matlab enabled")
add_subdirectory(matlab)
endif()

### Lint Target Setup ##########################################################################

set(LINT_TARGET lint)
set(LINT_SCRIPT ${CMAKE_SCRIPT_DIR}/lint.cmake)
add_custom_target(
${LINT_TARGET}
COMMAND ${CMAKE_COMMAND} -P ${LINT_SCRIPT}
)

### Install #################################################################################

# Install Includes
file(GLOB folders ${${PROJECT_NAME}_INCLUDE_DIRS}/*)
install(DIRECTORY ${folders} DESTINATION include)
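
The option() switches and cache variables above are normally chosen at configure time (for example, cmake -DCPU_ONLY=ON -DBUILD_PYTHON=ON ..). As a rough illustration only, and not part of this PR, the same choices could be collected in a hypothetical initial-cache script loaded with cmake -C:

# caffe-options.cmake (hypothetical initial-cache script: cmake -C caffe-options.cmake <source-dir>)
set(CPU_ONLY ON CACHE BOOL "Build Caffe without GPU support")
set(BUILD_PYTHON ON CACHE BOOL "Build Python wrapper")
set(BUILD_EXAMPLES ON CACHE BOOL "Build examples")
set(BLAS atlas CACHE STRING "BLAS backend (atlas is the default above)")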


61 changes: 61 additions & 0 deletions CMakeScripts/FindAtlas.cmake
@@ -0,0 +1,61 @@
# Find the Atlas (and Lapack) libraries
#
# The following variables are optionally searched for defaults
# Atlas_ROOT_DIR: Base directory where all Atlas components are found
#
# The following are set after configuration is done:
# Atlas_FOUND
# Atlas_INCLUDE_DIRS
# Atlas_LIBRARIES
# Atlas_LIBRARY_DIRS

set(Atlas_INCLUDE_SEARCH_PATHS
/usr/include/atlas
/usr/include/atlas-base
$ENV{Atlas_ROOT_DIR}
$ENV{Atlas_ROOT_DIR}/include
)

set(Atlas_LIB_SEARCH_PATHS
/usr/lib/atlas
/usr/lib/atlas-base
$ENV{Atlas_ROOT_DIR}
$ENV{Atlas_ROOT_DIR}/lib
)

find_path(Atlas_CBLAS_INCLUDE_DIR NAMES cblas.h PATHS ${Atlas_INCLUDE_SEARCH_PATHS})
find_path(Atlas_CLAPACK_INCLUDE_DIR NAMES clapack.h PATHS ${Atlas_INCLUDE_SEARCH_PATHS})
find_library(Atlas_CBLAS_LIBRARY NAMES ptcblas_r ptcblas cblas_r cblas PATHS ${Atlas_LIB_SEARCH_PATHS})
find_library(Atlas_BLAS_LIBRARY NAMES atlas_r atlas PATHS ${Atlas_LIB_SEARCH_PATHS})
find_library(Atlas_LAPACK_LIBRARY NAMES alapack_r alapack lapack_atlas PATHS ${Atlas_LIB_SEARCH_PATHS})

set(LOOKED_FOR

Atlas_CBLAS_INCLUDE_DIR
Atlas_CLAPACK_INCLUDE_DIR

Atlas_CBLAS_LIBRARY
Atlas_BLAS_LIBRARY
Atlas_LAPACK_LIBRARY
)

include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(Atlas DEFAULT_MSG ${LOOKED_FOR})

if(ATLAS_FOUND)

mark_as_advanced(${LOOKED_FOR})

set(Atlas_INCLUDE_DIR
${Atlas_CBLAS_INCLUDE_DIR}
${Atlas_CLAPACK_INCLUDE_DIR}
)

set(Atlas_LIBRARIES
${Atlas_LAPACK_LIBRARY}
${Atlas_CBLAS_LIBRARY}
${Atlas_BLAS_LIBRARY}
)

endif(ATLAS_FOUND)

48 changes: 48 additions & 0 deletions CMakeScripts/FindGFlags.cmake
@@ -0,0 +1,48 @@
# - Try to find GFLAGS
#
# The following variables are optionally searched for defaults
# GFLAGS_ROOT_DIR: Base directory where all GFLAGS components are found
#
# The following are set after configuration is done:
# GFLAGS_FOUND
# GFLAGS_INCLUDE_DIRS
# GFLAGS_LIBRARIES
# GFLAGS_LIBRARY_DIRS

include(FindPackageHandleStandardArgs)

set(GFLAGS_ROOT_DIR "" CACHE PATH "Folder contains Gflags")

# We are testing only a couple of files in the include directories
if(WIN32)
find_path(GFLAGS_INCLUDE_DIR gflags/gflags.h
PATHS ${GFLAGS_ROOT_DIR}/src/windows)
else()
find_path(GFLAGS_INCLUDE_DIR gflags/gflags.h
PATHS ${GFLAGS_ROOT_DIR})
endif()

if(MSVC)
find_library(GFLAGS_LIBRARY_RELEASE
NAMES libgflags
PATHS ${GFLAGS_ROOT_DIR}
PATH_SUFFIXES Release)

find_library(GFLAGS_LIBRARY_DEBUG
NAMES libgflags-debug
PATHS ${GFLAGS_ROOT_DIR}
PATH_SUFFIXES Debug)

set(GFLAGS_LIBRARY optimized ${GFLAGS_LIBRARY_RELEASE} debug ${GFLAGS_LIBRARY_DEBUG})
else()
find_library(GFLAGS_LIBRARY gflags)
endif()

find_package_handle_standard_args(GFLAGS DEFAULT_MSG
GFLAGS_INCLUDE_DIR GFLAGS_LIBRARY)


if(GFLAGS_FOUND)
set(GFLAGS_INCLUDE_DIRS ${GFLAGS_INCLUDE_DIR})
set(GFLAGS_LIBRARIES ${GFLAGS_LIBRARY})
endif()
48 changes: 48 additions & 0 deletions CMakeScripts/FindGlog.cmake
@@ -0,0 +1,48 @@
# - Try to find Glog
#
# The following variables are optionally searched for defaults
# GLOG_ROOT_DIR: Base directory where all GLOG components are found
#
# The following are set after configuration is done:
# GLOG_FOUND
# GLOG_INCLUDE_DIRS
# GLOG_LIBRARIES
# GLOG_LIBRARY_DIRS

include(FindPackageHandleStandardArgs)

set(GLOG_ROOT_DIR "" CACHE PATH "Folder contains Google glog")

if(WIN32)
find_path(GLOG_INCLUDE_DIR glog/logging.h
PATHS ${GLOG_ROOT_DIR}/src/windows)
else()
find_path(GLOG_INCLUDE_DIR glog/logging.h
PATHS ${GLOG_ROOT_DIR})
endif()

if(MSVC)
find_library(GLOG_LIBRARY_RELEASE libglog_static
PATHS ${GLOG_ROOT_DIR}
PATH_SUFFIXES Release)

find_library(GLOG_LIBRARY_DEBUG libglog_static
PATHS ${GLOG_ROOT_DIR}
PATH_SUFFIXES Debug)

set(GLOG_LIBRARY optimized ${GLOG_LIBRARY_RELEASE} debug ${GLOG_LIBRARY_DEBUG})
else()
find_library(GLOG_LIBRARY glog
PATHS ${GLOG_ROOT_DIR}
PATH_SUFFIXES
lib
lib64)
endif()

find_package_handle_standard_args(GLOG DEFAULT_MSG
GLOG_INCLUDE_DIR GLOG_LIBRARY)

if(GLOG_FOUND)
set(GLOG_INCLUDE_DIRS ${GLOG_INCLUDE_DIR})
set(GLOG_LIBRARIES ${GLOG_LIBRARY})
endif()
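
FindGFlags.cmake and FindGlog.cmake above follow the same pattern as FindAtlas.cmake: locate the headers and libraries, call find_package_handle_standard_args, and export the *_INCLUDE_DIRS / *_LIBRARIES variables when found. The sketch below is an illustration only, not code from this PR; it assumes a caffe library target and that CMAKE_MODULE_PATH already points at CMakeScripts, as CMakeLists.txt sets up:

# Resolve dependencies through the bundled Find modules and attach them to a target.
find_package(Atlas REQUIRED)   # provides Atlas_INCLUDE_DIR and Atlas_LIBRARIES
find_package(GFlags REQUIRED)  # provides GFLAGS_INCLUDE_DIRS and GFLAGS_LIBRARIES
find_package(Glog REQUIRED)    # provides GLOG_INCLUDE_DIRS and GLOG_LIBRARIES
include_directories(${Atlas_INCLUDE_DIR} ${GFLAGS_INCLUDE_DIRS} ${GLOG_INCLUDE_DIRS})
target_link_libraries(caffe ${Atlas_LIBRARIES} ${GFLAGS_LIBRARIES} ${GLOG_LIBRARIES})  # 'caffe' target assumed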