
Can you please explain how the network learns 'm' prototypes (initially)? #13

Open
FahadMahdi opened this issue Jul 27, 2021 · 3 comments

Comments

@FahadMahdi

After training is completed, I understand the process. However, I cannot understand how the prototypes are learned at the beginning. Are they user-defined? The paper doesn't say so.

@giangnguyen2412

giangnguyen2412 commented Sep 26, 2021

I have the same question :D It seems to be very important, yet how the prototypical parts are initially learned may be overlooked. @cfchen-duke can you give us a high-level intuition? I think this is explained in the paper but is not easy to understand, and a high-level explanation would be very useful. Thank you.

@kretes

kretes commented Oct 4, 2021

After reading the paper, I understood that the prototypes are 'learned', so the initialization is done randomly.
In the supplement (Figure 10: Overview of training algorithm) this is described as ∀j: prototype pj ← Uniform([0,1]^(H1×W1×D)).
Looking at the model code, I found https://github.com/cfchen-duke/ProtoPNet/blob/master/model.py#L105, which confirms this hypothesis.
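To make this concrete, here is a minimal sketch (not the authors' exact code; the class name and shape values are placeholders) of the mechanism: the prototype vectors are initialized uniformly at random and registered as `nn.Parameter`s, so any optimizer over the model's parameters updates them by gradient descent like ordinary weights.

```python
import torch
import torch.nn as nn

class ProtoLayer(nn.Module):
    """Hypothetical layer mirroring the prototype init in model.py#L105."""
    def __init__(self, num_prototypes=10, depth=128, h=1, w=1):
        super().__init__()
        self.prototype_shape = (num_prototypes, depth, h, w)
        # Uniform([0,1]) initialization; nn.Parameter makes it trainable.
        self.prototype_vectors = nn.Parameter(
            torch.rand(self.prototype_shape), requires_grad=True)

layer = ProtoLayer()
opt = torch.optim.SGD(layer.parameters(), lr=0.1)

# A dummy loss shows the prototypes receive gradients and get updated,
# i.e. they are learned during training, not user-defined.
before = layer.prototype_vectors.detach().clone()
loss = layer.prototype_vectors.sum()
loss.backward()
opt.step()
changed = not torch.equal(before, layer.prototype_vectors.detach())
```

In the real model these randomly initialized prototypes are shaped by the training losses (and, per the paper, later projected onto nearest latent patches of training images), but the initialization itself is just `torch.rand`.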

@boileddd

Line 105 in model.py:

```python
self.prototype_vectors = nn.Parameter(torch.rand(self.prototype_shape), requires_grad=True)
```
