Explicitly disable backward propagation of a layer for controlled fine tuning #389
Comments
@jeffdonahue has an improved backward interface in the works. Jeff, how … (sent by email in reply to kloudkl, Monday, May 5, 2014)
Evan Shelhamer
I believe that you can already do this in Caffe by setting …
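(The comment above is truncated. One mechanism Caffe does provide for this, shown below as a hedged sketch rather than a definitive answer, is setting a parameter blob's learning-rate multiplier to zero so the solver never updates it. Newer prototxt spells this lr_mult inside param blocks; older releases used blobs_lr. The layer here is hypothetical.)

```
layer {
  name: "conv1"            # hypothetical layer, for illustration only
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  # A zero multiplier freezes the corresponding blob: the solver's
  # base learning rate is scaled by lr_mult per parameter blob.
  param { lr_mult: 0 }     # weights
  param { lr_mult: 0 }     # bias
  convolution_param { num_output: 96 kernel_size: 11 stride: 4 }
}
```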
Right. What I'm suggesting is a field not for weight blobs but for bottoms to act as a vector of flags, one per bottom, to dictate whether backpropagation should continue to that bottom. If it overcomplicates the logic, we can leave it as an issue for now.
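For concreteness, a hedged sketch of what such per-bottom flags look like in a net definition. Present-day Caffe does carry a repeated propagate_down field in LayerParameter, one boolean per bottom, which matches this suggestion, though it postdates the comment; the branch names here are hypothetical.

```
layer {
  name: "fuse"                   # hypothetical names, for illustration
  type: "Eltwise"
  bottom: "frozen_branch"
  bottom: "trainable_branch"
  top: "fused"
  # One flag per bottom, in order: no gradients are sent back into
  # frozen_branch, while backpropagation continues into trainable_branch.
  propagate_down: false
  propagate_down: true
}
```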
Closing since this is already supported by …
If …
It does prevent all the unnecessary computation. It's not a hack at all. See … (sent by email in reply to Alexandre Dalyac, Wednesday, August 13, 2014, 9:26 AM)
ah ok, sorry guys. nice job on keeping the UI simple then!
The Google video classification CNN work explored four transfer learning methods: training from scratch, fine-tuning only the top layer (the classifier), fine-tuning the top 3 layers, and fine-tuning all layers [1]. Fine-tuning specific layers keeps the generic features of the other layers untouched during training. They found that fine-tuning the top 3 layers performed best.
It is not very straightforward to reason about whether the backward propagation of a layer is disabled or not in Caffe, as shown in #100 and #103, so it would be nice to be able to explicitly disable it.
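Putting the two mechanisms discussed above together, a minimal sketch (with hypothetical layer names) of "fine-tune only the top layers": freeze each lower layer's blobs with zero learning-rate multipliers, and stop backpropagation at the lowest fine-tuned layer so nothing is computed below it.

```
# A frozen lower layer: its parameters are never updated.
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 0 decay_mult: 0 }   # weights stay fixed
  param { lr_mult: 0 decay_mult: 0 }   # bias stays fixed
  convolution_param { num_output: 96 kernel_size: 11 stride: 4 }
}

# ... further frozen layers ...

# The lowest fine-tuned layer: its own parameters still receive
# gradients, but no gradient is propagated into pool5 or below.
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  propagate_down: false
  inner_product_param { num_output: 4096 }
}
```

Cutting the backward pass off this way skips the frozen layers entirely, which is the "no unnecessary computation" point made earlier in the thread.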
[1] Andrej Karpathy, George Toderici, Sanketh Shetty, Thomas Leung, Rahul Sukthankar, Li Fei-Fei. Large-Scale Video Classification with Convolutional Neural Networks. CVPR 2014.