bugfix regarding #100 #103
Conversation
Because both the Forward and Backward methods use bottom_vecs_ and top_vecs_, I'm afraid there is no way to save memory.

```cpp
template <typename Dtype>
const vector<Blob<Dtype>*>& Net<Dtype>::ForwardPrefilled() {
  // Run every layer in order on the net's internal bottom/top vectors.
  for (int i = 0; i < layers_.size(); ++i) {
    // LOG(ERROR) << "Forwarding " << layer_names_[i];
    layers_[i]->Forward(bottom_vecs_[i], &top_vecs_[i]);
  }
  return net_output_blobs_;
}

template <typename Dtype>
const vector<Blob<Dtype>*>& Net<Dtype>::Forward(
    const vector<Blob<Dtype>*>& bottom) {
  // Copy bottom to internal bottom
  for (int i = 0; i < bottom.size(); ++i) {
    net_input_blobs_[i]->CopyFrom(*bottom[i]);
  }
  return ForwardPrefilled();
}

template <typename Dtype>
Dtype Net<Dtype>::Backward() {
  Dtype loss = 0;
  // Walk the layers in reverse, accumulating the loss contributed by
  // each layer that participates in backpropagation.
  for (int i = layers_.size() - 1; i >= 0; --i) {
    if (layer_need_backward_[i]) {
      Dtype layer_loss = layers_[i]->Backward(
          top_vecs_[i], true, &bottom_vecs_[i]);
      loss += layer_loss;
    }
  }
  return loss;
}
```
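For concreteness, here is how the two entry points above might be called; this is an illustrative usage sketch, and `net` and `input_blob` are assumptions, not code from the PR.

```cpp
// Illustrative usage (net setup omitted; `net` and `input_blob`
// are assumed for the sketch).
vector<Blob<float>*> bottom;
bottom.push_back(&input_blob);
// Forward(bottom) copies the caller's blobs into net_input_blobs_
// and then falls through to ForwardPrefilled().
const vector<Blob<float>*>& output = net.Forward(bottom);
// ForwardPrefilled() reuses whatever is already in net_input_blobs_,
// so no extra copy is made.
const vector<Blob<float>*>& output2 = net.ForwardPrefilled();
// Backward() accumulates the loss over layers marked need-backward.
float loss = net.Backward();
```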
There is no concern about memory consumption as long as you do not invoke them: all the memory chunks are lazily allocated, which is one of the beauties of the design.

Yangqing
The lazy beauties lie in SyncedMemory::to_cpu and SyncedMemory::to_gpu.
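For readers who want the shape of that lazy-allocation pattern, here is a minimal stand-in sketch; the class below is a simplified illustration, not Caffe's actual SyncedMemory.

```cpp
#include <cstdlib>
#include <cstring>

// Simplified stand-in for Caffe's SyncedMemory: the buffer is only
// allocated the first time it is actually accessed, so declaring many
// blobs up front costs no memory until Forward/Backward touch them.
class LazyBuffer {
 public:
  explicit LazyBuffer(size_t size) : size_(size), cpu_ptr_(NULL) {}
  ~LazyBuffer() { std::free(cpu_ptr_); }

  // Analogue of SyncedMemory::to_cpu(): allocate on first access.
  const void* cpu_data() {
    if (cpu_ptr_ == NULL) {
      cpu_ptr_ = std::malloc(size_);
      std::memset(cpu_ptr_, 0, size_);
    }
    return cpu_ptr_;
  }

 private:
  size_t size_;
  void* cpu_ptr_;
};
```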
@Yangqing thanks for the clarification. It seemed to me that it was using more memory, but you are right that it is not.
I haven't experienced a core dump when exiting MATLAB after using the wrapper.
@rbgirshick I have double-checked with the new #132 and don't get any more core dumps. I guess it was probably because my branch was in a mixed state. But if you get any, just let me know.
bugfix regarding BVLC#100
Cleaner workaround for max pool
The bugfix for #100: when checking blobs_lr, also check blobs().size(): if it is nonzero, the layer owns parameter blobs and we need to do backpropagation.
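As a rough sketch of the fixed check inside Net&lt;Dtype&gt;::Init() (illustrative, not necessarily the exact diff in this PR; `layer_param` is the current layer's LayerParameter inside the Init loop):

```cpp
// Sketch of the need-backward check in Net<Dtype>::Init().
bool need_backward = false;
if (layer_param.blobs_lr_size() > 0) {
  // Explicit learning rates: backward is needed if any is nonzero.
  for (int j = 0; j < layer_param.blobs_lr_size(); ++j) {
    need_backward |= (layer_param.blobs_lr(j) > 0);
  }
} else if (layers_[i]->blobs().size() > 0) {
  // No blobs_lr specified, but the layer owns parameter blobs:
  // the learning rate defaults to 1, so backward is still needed
  // (the fix for #100).
  need_backward = true;
}
layer_need_backward_.push_back(need_backward);
```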
TODO: maybe add a regression test to rule out future bugs. Also, the Init() function is growing quite big now.
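Such a regression test might look roughly like the sketch below; the net definition, the 'innerproduct' layer spelling, and the layer_need_backward() accessor are all assumptions for illustration, not the project's actual test code.

```cpp
#include <google/protobuf/text_format.h>
#include <gtest/gtest.h>

#include "caffe/net.hpp"
#include "caffe/proto/caffe.pb.h"

// Hypothetical regression test for the #100 fix: a layer that owns
// parameter blobs but specifies no blobs_lr should still be marked
// as needing backward. The proto text and the layer_need_backward()
// accessor are assumptions for illustration.
TEST(NetInitTest, ParamLayerWithoutBlobsLrNeedsBackward) {
  const char* proto =
      "name: 'tiny'\n"
      "input: 'data'\n"
      "input_dim: 1 input_dim: 1 input_dim: 1 input_dim: 4\n"
      "layers {\n"
      "  layer { name: 'ip' type: 'innerproduct' num_output: 2 }\n"
      "  bottom: 'data'\n"
      "  top: 'out'\n"
      "}\n";  // note: no blobs_lr given
  caffe::NetParameter param;
  ASSERT_TRUE(google::protobuf::TextFormat::ParseFromString(proto, &param));
  caffe::Net<float> net(param);
  EXPECT_TRUE(net.layer_need_backward()[0]);
}
```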