Data queues, prefetching and multi-source #1933

Closed · wants to merge 1 commit

Conversation

@cypof (Member) commented Feb 21, 2015

(Moved from dev branch (#1775) to master.)

I split the work on data_layer out of #1148. It was written initially to get enough bandwidth to feed multiple GPUs and to fix performance issues with thread creation/destruction on each batch. Over time a few other things got in. In particular, we are experimenting at Flickr with different class ratios by reading from multiple sources: e.g., each dataset can be set up to contain one class, and the probability assigned to each source defines the class ratios at runtime.

In terms of performance, the current code could be fast enough, but it's hard to evaluate. If many solvers open the same DB and read, only the first one actually loads data; the others read from the cache. For parallel training, each solver needs to see a different batch, so either we split the dataset into several DBs, or we use large initial offsets in the same DB and hope the readers won't catch up with each other. If the offset is large, the data might no longer be in cache when the next solver reaches the same location, requiring the disk to seek back and forth. Seeking kills mechanical disk performance. Using an SSD helps, but then the dataset might not fit and you need multiple sources. This PR tries to address these problems.

Features:

  • Multiple solvers read from a single queue. This ensures they see different examples and that the source is accessed sequentially (see the queue sketch after this list).
  • Reading from multiple sources, in case one network location or disk is not fast enough to feed all solvers, or cannot hold the whole dataset. Each source can read from a shard, or from a copy of the same dataset with a random offset.
  • Probabilities on sources, e.g. to change the ratio of positives to negatives in binary classification. They are also useful for balancing reads between sharded sources: if one is faster than another, some examples might be used more often than others, which would change SGD behavior. Setting source probabilities inversely proportional to shard sizes ensures balanced coverage.
  • One loading thread per database, even if multiple solvers are running. This is needed for single-threaded DBs like LevelDB, and to ensure sequential access. In almost all cases one thread is enough for loading speed, as it doesn't do anything else. There is still a transform thread for each solver, as today.
  • No thread creation/deletion per batch. It's inefficient, and it causes problems with components that rely on thread-local caching. We also had problems with memory pinning and virtual memory. C.f. @thatguymike
  • Prefetch asynchronously to each GPU on a separate CUDA stream, so that the batch is already on the GPU when the solver needs it (see the CUDA sketch below).
  • Prefetch a configurable number of batches in host memory to smooth over bandwidth glitches; in particular, if data is loaded over a network it might make sense to configure a large prefetch queue.
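
To make the queue and source-probability design above concrete, here is a minimal C++ sketch. The names (`BoundedQueue`, `PickSource`, the capacity parameter) are hypothetical illustrations of the idea, not the classes this PR actually adds:

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <random>
#include <vector>

// Bounded blocking queue: loading threads push, solvers pop, so each
// example goes to exactly one solver and each DB is read sequentially.
// The capacity is the configurable host-memory prefetch depth.
template <typename T>
class BoundedQueue {
 public:
  explicit BoundedQueue(size_t capacity) : capacity_(capacity) {}

  void Push(T item) {
    std::unique_lock<std::mutex> lock(mutex_);
    not_full_.wait(lock, [&] { return queue_.size() < capacity_; });
    queue_.push_back(std::move(item));
    not_empty_.notify_one();
  }

  T Pop() {
    std::unique_lock<std::mutex> lock(mutex_);
    not_empty_.wait(lock, [&] { return !queue_.empty(); });
    T item = std::move(queue_.front());
    queue_.pop_front();
    not_full_.notify_one();
    return item;
  }

 private:
  const size_t capacity_;
  std::mutex mutex_;
  std::condition_variable not_empty_, not_full_;
  std::deque<T> queue_;
};

// Weighted choice between sources, e.g. with probabilities set
// inversely proportional to shard sizes to keep coverage balanced.
int PickSource(const std::vector<double>& probabilities,
               std::mt19937& rng) {
  std::discrete_distribution<int> dist(probabilities.begin(),
                                       probabilities.end());
  return dist(rng);
}
```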

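And a sketch of the per-GPU asynchronous prefetch on a dedicated stream. Again the function and parameter names are hypothetical; the host buffer is assumed to be pinned (allocated with `cudaMallocHost`) so the copy can actually run asynchronously:

```cpp
#include <cuda_runtime.h>

// Hypothetical sketch: copy the next host batch to the GPU on a
// dedicated prefetch stream so the transfer overlaps with the
// solver's compute on its own stream.
void PrefetchBatch(const float* pinned_host_batch, float* gpu_batch,
                   size_t bytes, cudaStream_t prefetch_stream,
                   cudaEvent_t batch_ready) {
  cudaMemcpyAsync(gpu_batch, pinned_host_batch, bytes,
                  cudaMemcpyHostToDevice, prefetch_stream);
  // The solver waits on this event from its own stream, e.g.
  // cudaStreamWaitEvent(solver_stream, batch_ready, 0),
  // so the CPU never blocks.
  cudaEventRecord(batch_ready, prefetch_stream);
}
```
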
@cypof (Member, Author) commented Mar 2, 2015

@shelhamer I have a prototype of the socket_layer; do you prefer it added to this PR or kept separate? Also, do you plan to merge this one soon, or would you rather wait until we make more progress on P2P?

@cypof (Member, Author) commented Mar 25, 2015

It seems that on RedHat Linux only one core is used in some cases. I suspect an OpenCV compile option, as it only happened with encoded images, but I haven't had time to investigate. On Ubuntu 14.04 things work well; on ImageNet we get about 1500 images/s with 8 GPUs and prefetch threads.

cypof added a commit to cypof/caffe that referenced this pull request Apr 22, 2015
cypof added a commit to cypof/caffe that referenced this pull request Apr 23, 2015
@shelhamer (Member) commented:
Replaced by #2366, #2367, and #2368 for review and merge, although multi-source still needs splitting out.

@shelhamer shelhamer closed this Apr 27, 2015
@shelhamer shelhamer removed the focus label Apr 27, 2015