Fix crash when pairing an odd number of devices without P2P (BVLC/github issue #3531) #3586

Closed
wants to merge 78 commits into from
78 commits
5241487
Fix crash when pairing an odd number of devices without P2P (BVLC/git…
Jan 22, 2016
6537309
improved to load RGB image as grayscale image
tishibas Mar 17, 2015
7aa3406
fixing the database param
ajkl Aug 14, 2015
68ebc39
Net: expose param_display_names_
jeffdonahue Feb 15, 2015
a7b8c57
Don't attempt to write CSV if there are no lines to write
dgolden1 Nov 5, 2015
6332e7f
Add a -c to wget so that it continues interrupted downloads
sjbrown Nov 7, 2015
f976d9c
Fix loss of last iteration when average_loss > 1
bwilbertz Sep 30, 2015
999a043
ELU layer with basic tests
mohomran Nov 26, 2015
baa4291
- Fix to cmake build for clang
jczaja Dec 14, 2015
c85f0b2
TestDataTransformer: fix some memory leaks caused by use of 'new'
jeffdonahue Dec 30, 2015
074128b
fixbug #issues/3494 No to_python (by-value) converter found for C++ t…
Jan 20, 2016
a268187
add register Net and Solver
Jan 21, 2016
b6c1428
Add makefile config option for linking Python 3 libraries
cooperra Jan 21, 2016
b619796
copy proto to distribute directory
junshi15 Jan 22, 2016
8adf15e
Add ChannelwiseAffine for batch norm
ducha-aiki Jan 14, 2016
e7b14e5
Version 1.0.0-rc3
lukeyeager Jan 22, 2016
1352522
Separation and generalization of ChannelwiseAffineLayer into BiasLayer
jeffdonahue Jan 22, 2016
42037cd
show Caffe's version from MatCaffe
ronghanghu Jan 23, 2016
b0366e3
Updated import to make it work with pydotplus
Austriker Jan 21, 2016
bd6c51b
Prevent in-place computation in ReshapeLayer and FlattenLayer
kkhoot Nov 27, 2015
5997b3d
Remove unnecessary CAFFE_TEST_CUDA_PROP declarations
jeffdonahue Jan 26, 2016
aaa2d46
Update mnist readme.md: scale moved to transform_param
madan-ram Jul 23, 2015
8092e06
Make the two separate build systems clearer in the documentation
keir Jun 26, 2015
c615587
Remove incorrect cast of gemm int arg to Dtype in BiasLayer
jeffdonahue Jan 27, 2016
2cf6d2e
use relative paths on making build/tools/ links
gdh1995 Jan 13, 2016
0474c40
Nicely prints GPU names
drnikolaev Feb 2, 2016
261e652
bugfix for incorrect behaviour in caffe_parse_linker_libs function wh…
Feb 9, 2016
84f933f
Remove useless LevelDB include
flx42 Feb 16, 2016
b5c1b40
Fix a typo in docs
prayagverma Feb 18, 2016
f5f64d5
tranpose parameter added to IP layer to support tied weights in an au…
kashefy Jan 29, 2016
beed648
fix library install name on OSX for relative path linking
shelhamer Feb 20, 2016
8a00f49
Fix OSX El Capitan CUDA incompatibility, by adding lib to rpath
mohamed-ezz Feb 5, 2016
fd052a7
removing all references to Blob.num property (that assumes Blob is 4D…
Feb 23, 2016
f808af7
[example] improve classification notebook
longjon Feb 24, 2016
af71bb0
[data] get_mnist.sh rewrite; prevents prompt in tutorial notebooks
longjon Feb 5, 2016
c3e1be6
[example] improve learning LeNet notebook
longjon Feb 24, 2016
4d62595
[example] improve fine-tuning notebook
jeffdonahue Feb 24, 2016
d495a28
[example] improve brewing logreg notebook
jeffdonahue Feb 24, 2016
4a1e8e5
CMake: Do not include "${PROJECT_BINARY_DIR}/include" with SYSTEM option
olesalscheider Feb 24, 2016
6f5d1be
add InputLayer for Net input
shelhamer Oct 16, 2015
af1e8c6
deprecate input fields and upgrade automagically
shelhamer Oct 16, 2015
35fca23
drop Net inputs + Forward with bottoms
shelhamer Oct 17, 2015
159ceb8
collect Net inputs from Input layers
shelhamer Dec 3, 2015
f4eb892
[examples] switch examples + models to Input layers
shelhamer Dec 4, 2015
7a9e40d
Deprecate ForwardPrefilled(), Forward(bottom, loss) in lieu of dropping
shelhamer Feb 27, 2016
e0d4f26
Add Dockerfiles for creating Caffe executable images.
Jan 5, 2016
f6c4879
fix flags in #3518 for nvidia-docker
shelhamer Feb 27, 2016
52d3434
supporting N-D Blobs in Dropout layer Reshape
Feb 25, 2016
3324d45
Use 'six' library to ensure python3 compliance.
Feb 29, 2016
49cc7fb
NetSpec: allow setting blob names by string
Feb 29, 2016
c007eb8
Added tutorial on how to use python datalayers and multilabel classif…
beijbom Dec 21, 2015
91d3425
Refactor and improve code style.
Feb 18, 2016
490432d
Finalized tutorial. Removed asyncronous layer.
beijbom Feb 27, 2016
7133cf6
output all logging from upgrade net tools
shelhamer Mar 1, 2016
283fe12
check all net upgrade conditions
shelhamer Mar 1, 2016
89ec6b9
fix input field -> input layer net upgrade: only convert full defs
shelhamer Mar 1, 2016
d2b07e4
refuse to upgrade net with layer/layers inconsistency
shelhamer Mar 1, 2016
ec2d346
- doc and cmake update MKL related
jczaja Mar 1, 2016
b90ac1f
[example] groom multilabel notebook title, order
shelhamer Mar 1, 2016
debe545
minor mistakes removed
Mar 2, 2016
89b921c
Removed lint script reference to non-existant caffe_memcpy function.
BlGene Mar 4, 2016
0757f7c
[travis] force protobuf 3.0.0b2 for Python 3
longjon Mar 4, 2016
afe785e
[pycaffe] add coord_map.py for computing induced coordinate transform
longjon Jan 30, 2016
72f6838
[pycaffe] document, style, and complete coord_map
shelhamer Feb 28, 2016
84b3f07
[pycaffe] align coord_map and #3570 Crop layer
shelhamer Feb 28, 2016
6e7b06a
[pycaffe] test coord_map
shelhamer Mar 4, 2016
c71231e
add check and find GPU device utilities
junshi15 Jan 22, 2016
486d432
add CropLayer: crop blob to another blob's dimensions with offsets
longjon Dec 27, 2014
c56dd0f
Extend Crop to N-D, changed CropParameter.
BlGene Jan 19, 2016
985f2ea
Crop: fixes, tests and negative axis indexing.
BlGene Feb 29, 2016
e479467
Crop: more tests and test tuning.
shelhamer Mar 4, 2016
00ca028
split p2psync::run()
junshi15 Jan 22, 2016
05c9dd4
[build] travis: remove existing conda dir
Mar 9, 2016
011c69c
Update Makefile: Changed MKL_DIR to MKLROOT
jreniecki Mar 15, 2016
b260402
Use lazy initialization to reuse orderd dict/list creations to save t…
Mar 30, 2016
6f96e10
test_net.cpp: add TestForcePropagateDown
jeffdonahue Apr 4, 2016
3a63d20
Net: setting `propagate_down: true` forces backprop
jeffdonahue Jan 27, 2016
d2617ef
Fix initialization of deconvolution layer parameters from python net_…
Apr 6, 2016
7 changes: 7 additions & 0 deletions CMakeLists.txt
@@ -9,6 +9,11 @@ endif()
# ---[ Caffe project
project(Caffe C CXX)

# ---[ Caffe version
set(CAFFE_TARGET_VERSION "1.0.0-rc3")
set(CAFFE_TARGET_SOVERSION "1.0.0-rc3")
add_definitions(-DCAFFE_VERSION=${CAFFE_TARGET_VERSION})

# ---[ Using cmake scripts and modules
list(APPEND CMAKE_MODULE_PATH ${PROJECT_SOURCE_DIR}/cmake/Modules)

@@ -42,6 +47,8 @@ if(UNIX OR APPLE)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fPIC -Wall")
endif()

caffe_set_caffe_link()

if(USE_libstdcpp)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -stdlib=libstdc++")
message("-- Warning: forcing libstdc++ (controlled by USE_libstdcpp option in cmake)")
53 changes: 38 additions & 15 deletions Makefile
@@ -29,9 +29,17 @@ SRC_DIRS := $(shell find * -type d -exec bash -c "find {} -maxdepth 1 \
\( -name '*.cpp' -o -name '*.proto' \) | grep -q ." \; -print)

# The target shared library name
LIBRARY_NAME := $(PROJECT)
LIB_BUILD_DIR := $(BUILD_DIR)/lib
STATIC_NAME := $(LIB_BUILD_DIR)/lib$(PROJECT).a
DYNAMIC_NAME := $(LIB_BUILD_DIR)/lib$(PROJECT).so
STATIC_NAME := $(LIB_BUILD_DIR)/lib$(LIBRARY_NAME).a
DYNAMIC_VERSION_MAJOR := 1
DYNAMIC_VERSION_MINOR := 0
DYNAMIC_VERSION_REVISION := 0-rc3
DYNAMIC_NAME_SHORT := lib$(LIBRARY_NAME).so
#DYNAMIC_SONAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR)
DYNAMIC_VERSIONED_NAME_SHORT := $(DYNAMIC_NAME_SHORT).$(DYNAMIC_VERSION_MAJOR).$(DYNAMIC_VERSION_MINOR).$(DYNAMIC_VERSION_REVISION)
DYNAMIC_NAME := $(LIB_BUILD_DIR)/$(DYNAMIC_VERSIONED_NAME_SHORT)
COMMON_FLAGS += -DCAFFE_VERSION=$(DYNAMIC_VERSION_MAJOR).$(DYNAMIC_VERSION_MINOR).$(DYNAMIC_VERSION_REVISION)

##############################
# Get all source files
@@ -191,7 +199,7 @@ ifeq ($(USE_OPENCV), 1)
endif

endif
PYTHON_LIBRARIES := boost_python python2.7
PYTHON_LIBRARIES ?= boost_python python2.7
WARNINGS := -Wall -Wno-sign-compare

##############################
@@ -240,6 +248,8 @@ ifeq ($(UNAME), Linux)
LINUX := 1
else ifeq ($(UNAME), Darwin)
OSX := 1
OSX_MAJOR_VERSION := $(shell sw_vers -productVersion | cut -f 1 -d .)
OSX_MINOR_VERSION := $(shell sw_vers -productVersion | cut -f 2 -d .)
endif

# Linux
@@ -253,6 +263,7 @@ ifeq ($(LINUX), 1)
# boost::thread is reasonably called boost_thread (compare OS X)
# We will also explicitly add stdc++ to the link target.
LIBRARIES += boost_thread stdc++
VERSIONFLAGS += -Wl,-soname,$(DYNAMIC_VERSIONED_NAME_SHORT) -Wl,-rpath,$(ORIGIN)/../lib
endif

# OS X:
@@ -268,14 +279,22 @@ ifeq ($(OSX), 1)
endif
# clang throws this warning for cuda headers
WARNINGS += -Wno-unneeded-internal-declaration
# 10.11 strips DYLD_* env vars so link CUDA (rpath is available on 10.5+)
OSX_10_OR_LATER := $(shell [ $(OSX_MAJOR_VERSION) -ge 10 ] && echo true)
OSX_10_5_OR_LATER := $(shell [ $(OSX_MINOR_VERSION) -ge 5 ] && echo true)
ifeq ($(OSX_10_OR_LATER),true)
ifeq ($(OSX_10_5_OR_LATER),true)
LDFLAGS += -Wl,-rpath,$(CUDA_LIB_DIR)
endif
endif
endif
# gtest needs to use its own tuple to not conflict with clang
COMMON_FLAGS += -DGTEST_USE_OWN_TR1_TUPLE=1
# boost::thread is called boost_thread-mt to mark multithreading on OS X
LIBRARIES += boost_thread-mt
# we need to explicitly ask for the rpath to be obeyed
DYNAMIC_FLAGS := -install_name @rpath/libcaffe.so
ORIGIN := @loader_path
VERSIONFLAGS += -Wl,-install_name,@rpath/$(DYNAMIC_VERSIONED_NAME_SHORT) -Wl,-rpath,$(ORIGIN)/../../build/lib
else
ORIGIN := \$$ORIGIN
endif
@@ -345,9 +364,9 @@ ifeq ($(BLAS), mkl)
# MKL
LIBRARIES += mkl_rt
COMMON_FLAGS += -DUSE_MKL
MKL_DIR ?= /opt/intel/mkl
BLAS_INCLUDE ?= $(MKL_DIR)/include
BLAS_LIB ?= $(MKL_DIR)/lib $(MKL_DIR)/lib/intel64
MKLROOT ?= /opt/intel/mkl
BLAS_INCLUDE ?= $(MKLROOT)/include
BLAS_LIB ?= $(MKLROOT)/lib $(MKLROOT)/lib/intel64
else ifeq ($(BLAS), open)
# OpenBLAS
LIBRARIES += openblas
@@ -478,7 +497,7 @@ py: $(PY$(PROJECT)_SO) $(PROTO_GEN_PY)
$(PY$(PROJECT)_SO): $(PY$(PROJECT)_SRC) $(PY$(PROJECT)_HXX) | $(DYNAMIC_NAME)
@ echo CXX/LD -o $@ $<
$(Q)$(CXX) -shared -o $@ $(PY$(PROJECT)_SRC) \
-o $@ $(LINKFLAGS) -l$(PROJECT) $(PYTHON_LDFLAGS) \
-o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(PYTHON_LDFLAGS) \
-Wl,-rpath,$(ORIGIN)/../../build/lib

mat$(PROJECT): mat
@@ -542,7 +561,8 @@ $(ALL_BUILD_DIRS): | $(BUILD_DIR_LINK)

$(DYNAMIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)
@ echo LD -o $@
$(Q)$(CXX) -shared -o $@ $(OBJS) $(LINKFLAGS) $(LDFLAGS) $(DYNAMIC_FLAGS)
$(Q)$(CXX) -shared -o $@ $(OBJS) $(VERSIONFLAGS) $(LINKFLAGS) $(LDFLAGS)
@ cd $(BUILD_DIR)/lib; rm -f $(DYNAMIC_NAME_SHORT); ln -s $(DYNAMIC_VERSIONED_NAME_SHORT) $(DYNAMIC_NAME_SHORT)

$(STATIC_NAME): $(OBJS) | $(LIB_BUILD_DIR)
@ echo AR -o $@
@@ -573,33 +593,33 @@ $(TEST_ALL_BIN): $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) \
| $(DYNAMIC_NAME) $(TEST_BIN_DIR)
@ echo CXX/LD -o $@ $<
$(Q)$(CXX) $(TEST_MAIN_SRC) $(TEST_OBJS) $(GTEST_OBJ) \
-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(PROJECT) -Wl,-rpath,$(ORIGIN)/../lib
-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib

$(TEST_CU_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CU_BUILD_DIR)/%.o \
$(GTEST_OBJ) | $(DYNAMIC_NAME) $(TEST_BIN_DIR)
@ echo LD $<
$(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) \
-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(PROJECT) -Wl,-rpath,$(ORIGIN)/../lib
-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib

$(TEST_CXX_BINS): $(TEST_BIN_DIR)/%.testbin: $(TEST_CXX_BUILD_DIR)/%.o \
$(GTEST_OBJ) | $(DYNAMIC_NAME) $(TEST_BIN_DIR)
@ echo LD $<
$(Q)$(CXX) $(TEST_MAIN_SRC) $< $(GTEST_OBJ) \
-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(PROJECT) -Wl,-rpath,$(ORIGIN)/../lib
-o $@ $(LINKFLAGS) $(LDFLAGS) -l$(LIBRARY_NAME) -Wl,-rpath,$(ORIGIN)/../lib

# Target for extension-less symlinks to tool binaries with extension '*.bin'.
$(TOOL_BUILD_DIR)/%: $(TOOL_BUILD_DIR)/%.bin | $(TOOL_BUILD_DIR)
@ $(RM) $@
@ ln -s $(abspath $<) $@
@ ln -s $(notdir $<) $@

$(TOOL_BINS): %.bin : %.o | $(DYNAMIC_NAME)
@ echo CXX/LD -o $@
$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(PROJECT) $(LDFLAGS) \
$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(LDFLAGS) \
-Wl,-rpath,$(ORIGIN)/../lib

$(EXAMPLE_BINS): %.bin : %.o | $(DYNAMIC_NAME)
@ echo CXX/LD -o $@
$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(PROJECT) $(LDFLAGS) \
$(Q)$(CXX) $< -o $@ $(LINKFLAGS) -l$(LIBRARY_NAME) $(LDFLAGS) \
-Wl,-rpath,$(ORIGIN)/../../lib

proto: $(PROTO_GEN_CC) $(PROTO_GEN_HEADER)
@@ -651,6 +671,8 @@ superclean: clean supercleanfiles
$(DIST_ALIASES): $(DISTRIBUTE_DIR)

$(DISTRIBUTE_DIR): all py | $(DISTRIBUTE_SUBDIRS)
# add proto
cp -r src/caffe/proto $(DISTRIBUTE_DIR)/
# add include
cp -r include $(DISTRIBUTE_DIR)/
mkdir -p $(DISTRIBUTE_DIR)/include/caffe/proto
@@ -661,6 +683,7 @@ $(DISTRIBUTE_DIR): all py | $(DISTRIBUTE_SUBDIRS)
# add libraries
cp $(STATIC_NAME) $(DISTRIBUTE_DIR)/lib
install -m 644 $(DYNAMIC_NAME) $(DISTRIBUTE_DIR)/lib
cd $(DISTRIBUTE_DIR)/lib; rm -f $(DYNAMIC_NAME_SHORT); ln -s $(DYNAMIC_VERSIONED_NAME_SHORT) $(DYNAMIC_NAME_SHORT)
# add python - it's not the standard way, indeed...
cp -r python $(DISTRIBUTE_DIR)/python

5 changes: 5 additions & 0 deletions Makefile.config.example
@@ -70,6 +70,11 @@ PYTHON_INCLUDE := /usr/include/python2.7 \
# $(ANACONDA_HOME)/include/python2.7 \
# $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \

# Uncomment to use Python 3 (default is Python 2)
# PYTHON_LIBRARIES := boost_python3 python3.5m
# PYTHON_INCLUDE := /usr/include/python3.5m \
# /usr/lib/python3.5/dist-packages/numpy/core/include

# We need to be able to find libpythonX.X.so or .dylib.
PYTHON_LIB := /usr/lib
# PYTHON_LIB := $(ANACONDA_HOME)/lib
2 changes: 1 addition & 1 deletion cmake/Modules/FindMKL.cmake
@@ -20,7 +20,7 @@ caffe_option(MKL_MULTI_THREADED "Use multi-threading" ON IF NOT MKL_USE_SINGL

# ---[ Root folders
set(INTEL_ROOT "/opt/intel" CACHE PATH "Folder contains intel libs")
find_path(MKL_ROOT include/mkl.h PATHS $ENV{MKL_ROOT} ${INTEL_ROOT}/mkl
find_path(MKL_ROOT include/mkl.h PATHS $ENV{MKLROOT} ${INTEL_ROOT}/mkl
DOC "Folder contains MKL")

# ---[ Find include dir
2 changes: 1 addition & 1 deletion cmake/ProtoBuf.cmake
@@ -23,7 +23,7 @@ endif()

# place where to generate protobuf sources
set(proto_gen_folder "${PROJECT_BINARY_DIR}/include/caffe/proto")
include_directories(SYSTEM "${PROJECT_BINARY_DIR}/include")
include_directories("${PROJECT_BINARY_DIR}/include")

set(PROTOBUF_GENERATE_CPP_APPEND_PATH TRUE)

2 changes: 1 addition & 1 deletion cmake/Summary.cmake
@@ -101,7 +101,7 @@ function(caffe_print_configuration_summary)
caffe_status("")
caffe_status("******************* Caffe Configuration Summary *******************")
caffe_status("General:")
caffe_status(" Version : ${Caffe_VERSION}")
caffe_status(" Version : ${CAFFE_TARGET_VERSION}")
caffe_status(" Git : ${Caffe_GIT_VERSION}")
caffe_status(" System : ${CMAKE_SYSTEM_NAME}")
caffe_status(" C++ compiler : ${CMAKE_CXX_COMPILER}")
19 changes: 10 additions & 9 deletions cmake/Targets.cmake
@@ -1,16 +1,17 @@
################################################################################################
# Defines global Caffe_LINK flag, This flag is required to prevent linker from excluding
# some objects which are not addressed directly but are registered via static constructors
if(BUILD_SHARED_LIBS)
set(Caffe_LINK caffe)
else()
if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
set(Caffe_LINK -Wl,-force_load caffe)
elseif("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set(Caffe_LINK -Wl,--whole-archive caffe -Wl,--no-whole-archive)
macro(caffe_set_caffe_link)
if(BUILD_SHARED_LIBS)
set(Caffe_LINK caffe)
else()
if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Clang")
set(Caffe_LINK -Wl,-force_load caffe)
elseif("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
set(Caffe_LINK -Wl,--whole-archive caffe -Wl,--no-whole-archive)
endif()
endif()
endif()

endmacro()
################################################################################################
# Convenient command to setup source group for IDEs that support this feature (VS, XCode)
# Usage:
5 changes: 3 additions & 2 deletions cmake/Utils.cmake
@@ -346,10 +346,11 @@ function(caffe_parse_linker_libs Caffe_LINKER_LIBS_variable folders_var flags_va
elseif(lib MATCHES "^-l.*")
list(APPEND libflags ${lib})
elseif(IS_ABSOLUTE ${lib})
get_filename_component(name_we ${lib} NAME_WE)
get_filename_component(folder ${lib} PATH)
get_filename_component(filename ${lib} NAME)
string(REGEX REPLACE "\\.[^.]*$" "" filename_without_shortest_ext ${filename})

string(REGEX MATCH "^lib(.*)" __match ${name_we})
string(REGEX MATCH "^lib(.*)" __match ${filename_without_shortest_ext})
list(APPEND libflags -l${CMAKE_MATCH_1})
list(APPEND folders ${folder})
else()
2 changes: 1 addition & 1 deletion data/ilsvrc12/get_ilsvrc_aux.sh
@@ -12,7 +12,7 @@ cd $DIR

echo "Downloading..."

wget http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz
wget -c http://dl.caffe.berkeleyvision.org/caffe_ilsvrc12.tar.gz

echo "Unzipping..."

23 changes: 7 additions & 16 deletions data/mnist/get_mnist.sh
@@ -6,19 +6,10 @@ cd $DIR

echo "Downloading..."

wget --no-check-certificate http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
wget --no-check-certificate http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
wget --no-check-certificate http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
wget --no-check-certificate http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz

echo "Unzipping..."

gunzip train-images-idx3-ubyte.gz
gunzip train-labels-idx1-ubyte.gz
gunzip t10k-images-idx3-ubyte.gz
gunzip t10k-labels-idx1-ubyte.gz

# Creation is split out because leveldb sometimes causes segfault
# and needs to be re-created.

echo "Done."
for fname in train-images-idx3-ubyte train-labels-idx1-ubyte t10k-images-idx3-ubyte t10k-labels-idx1-ubyte
do
if [ ! -e $fname ]; then
wget --no-check-certificate http://yann.lecun.com/exdb/mnist/${fname}.gz
gunzip ${fname}.gz
fi
done
50 changes: 50 additions & 0 deletions docker/Makefile
@@ -0,0 +1,50 @@
# A makefile to build the docker images for caffe.
# Two caffe images will be built:
# caffe:cpu --> A CPU-only build of caffe.
# caffe:gpu --> A GPU-enabled build using the latest CUDA and CUDNN versions.

DOCKER ?= docker

all: docker_files standalone

.PHONY: standalone devel

standalone: cpu_standalone gpu_standalone


cpu_standalone: standalone/cpu/Dockerfile
$(DOCKER) build -t caffe:cpu standalone/cpu

gpu_standalone: standalone/gpu/Dockerfile
$(DOCKER) build -t caffe:gpu standalone/gpu

docker_files: standalone_files

standalone_files: standalone/cpu/Dockerfile standalone/gpu/Dockerfile

FROM_GPU = "nvidia/cuda:cudnn"
FROM_CPU = "ubuntu:14.04"
GPU_CMAKE_ARGS = -DUSE_CUDNN=1
CPU_CMAKE_ARGS = -DCPU_ONLY=1

# A make macro to select the CPU or GPU base image.
define from_image
$(if $(strip $(findstring gpu,$@)),$(FROM_GPU),$(FROM_CPU))
endef

# A make macro to select the CPU or GPU build args.
define build_args
$(if $(strip $(findstring gpu,$@)),$(GPU_CMAKE_ARGS),$(CPU_CMAKE_ARGS))
endef

# A make macro to construct the CPU or GPU Dockerfile from the template
define create_docker_file
@echo creating $@
@echo "FROM "$(from_image) > $@
@cat $^ | sed 's/$${CMAKE_ARGS}/$(build_args)/' >> $@
endef


standalone/%/Dockerfile: templates/Dockerfile.template
$(create_docker_file)

52 changes: 52 additions & 0 deletions docker/README.md
@@ -0,0 +1,52 @@
# Caffe standalone Dockerfiles.

The `standalone` subfolder contains docker files for generating both CPU and GPU executable images for Caffe. The images can be built using make, or by running:

```
docker build -t caffe:cpu standalone/cpu
```
for example. (Here `gpu` can be substituted for `cpu`, but to keep the readme simple, only the `cpu` case will be discussed in detail).
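
For reference, the `make` route uses the targets defined in the `docker/Makefile` shown above; a minimal sketch, run from the `docker` directory:
```
make cpu_standalone   # builds the caffe:cpu image
make gpu_standalone   # builds the caffe:gpu image
make standalone       # builds both
```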

Note that the GPU standalone requires a CUDA 7.5 capable driver to be installed on the system, as well as [nvidia-docker] for running the Docker containers. In that case it is generally sufficient to substitute `nvidia-docker` for `docker` in any of the commands mentioned.
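
As an illustrative sketch (assuming the `caffe:gpu` image has already been built), a GPU container can be checked with:
```
# query the first GPU from inside the container
nvidia-docker run -ti caffe:gpu caffe device_query -gpu 0
```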

# Running Caffe using the docker image

In order to test the Caffe image, run:
```
docker run -ti caffe:cpu caffe --version
```
which should show a message like:
```
libdc1394 error: Failed to initialize libdc1394
caffe version 1.0.0-rc3
```

One can also build and run the Caffe tests in the image using:
```
docker run -ti caffe:cpu bash -c "cd /opt/caffe/build; make runtest"
```

In order to get the most out of the caffe image, some more advanced `docker run` options could be used. For example, running:
```
docker run -ti --volume=$(pwd):/workspace caffe:cpu caffe train --solver=example_solver.prototxt
```
will train a network defined in the `example_solver.prototxt` file in the current directory (`$(pwd)` is mapped to the container volume `/workspace` using the `--volume=` Docker flag).

Note that docker runs all commands as root by default, and thus any output files (e.g. snapshots) generated will be owned by the root user. In order to ensure that the current user is used instead, the following command can be used:
```
docker run -ti --volume=$(pwd):/workspace -u $(id -u):$(id -g) caffe:cpu caffe train --solver=example_solver.prototxt
```
where the `-u` Docker command line option runs the commands in the container as the specified user, and the shell command `id` is used to determine the user and group ID of the current user. Note that the Caffe docker images have `/workspace` defined as the default working directory. This can be overridden using the `--workdir=` Docker command line option.
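
For example (a sketch that assumes a solver definition sits in the mounted host directory), the volume and working directory can both be set explicitly:
```
docker run -ti --volume=$(pwd):/data --workdir=/data -u $(id -u):$(id -g) caffe:cpu caffe train --solver=example_solver.prototxt
```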

# Other use-cases

Although running the `caffe` command in the docker containers as described above serves many purposes, the container can also be used for more interactive use cases. For example, specifying `bash` as the command instead of `caffe` yields a shell that can be used for interactive tasks. (Since the caffe build requirements are included in the container, this can also be used to build and run local versions of caffe).
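
For instance, an interactive shell inside the container can be started with:
```
docker run -ti caffe:cpu bash
```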

Another use case is to run python scripts that depend on `caffe`'s Python modules. Using the `python` command instead of `bash` or `caffe` will allow this, and an interactive interpreter can be started by running:
```
docker run -ti caffe:cpu python
```
(`ipython` is also available in the container).

Since the `caffe/python` folder is also added to the path, the utility scripts defined there can be run directly as executables. This includes `draw_net.py`, `classify.py`, and `detect.py`.
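
As a sketch (the network definition `deploy.prototxt` here is a hypothetical file in the mounted directory), a network could be rendered to an image with:
```
docker run -ti --volume=$(pwd):/workspace caffe:cpu draw_net.py deploy.prototxt net.png
```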
