
Minor changes #501
Merged 5 commits into BindsNET:master on Jul 19, 2021

Conversation

@mahbodnr (Contributor)

A few suggestions:

  • Nodes: when traces_additive is False, the spike trace (self.x) is filled with 1 at spike points. However, that is not always the desired value (in Izhikevich 2007, for example, it is set to 0.1). I think the hard-coded 1 should be replaced by the trace_scale value, so users can set it themselves (see the sketch after this list).
  • Nodes: typo: self.tau
  • Learning: the default value of nu is None, which makes the PostPre learning rule a silent no-op. I don't think that is a good default; it should either be replaced by a sensible float or made a mandatory argument.
  • Network: the format of the inputs argument of the run method is not very common, and since input in most deep learning frameworks (mainly TensorFlow and PyTorch) comes as plain tensors, it can be tricky, especially for new users. Furthermore, since there is no explicit error pointing out the problem, it can be a hard-to-find, time-consuming mistake. Adding an assert that checks the type of inputs would be helpful here.
  • Network: a progress bar can be useful in very long runs. A custom progress bar (including some information about the run and the network) could be added later.
  • Learning: replace the deprecated torch.ger with torch.outer (https://pytorch.org/docs/stable/generated/torch.ger.html).
  • models: typo: inhibitory
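
A minimal sketch of what I mean, in PyTorch (the argument names follow the spirit of BindsNET's Nodes, but this is an illustration, not the library's actual code):

```python
import torch

def update_trace(x, s, trace_decay, trace_scale, traces_additive):
    # x: spike-trace tensor; s: boolean spike tensor of the same shape.
    x = x * trace_decay  # exponential decay of the trace
    if traces_additive:
        x = x + trace_scale * s.float()  # traces accumulate on each spike
    else:
        # Suggested change: fill with trace_scale (e.g. 0.1 for Izhikevich
        # 2007) at spike points instead of a hard-coded 1.
        x = x.masked_fill(s, trace_scale)
    return x
```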

@Hananel-Hazan

@SimonInParis (Collaborator)

Thanks a lot.
- Nodes / traces_additive: makes sense, agreed.
- nu default value: this is very model-dependent; it is not obvious how to compare with TensorFlow/Keras here. There is no sensible global default.
- Network input-spikes format assert: yes, nice (sketched below).
- Progress bar: no, a call to .run() is usually very short (less than 0.1 s in most cases), and it would add another dependency to BindsNET. It is the user's job to add one if needed.
- torch.ger: yes, thanks for the info! (Example below.)
- Typos: thanks again!
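
A minimal sketch of such a check at the top of Network.run (the dict-of-tensors format and the message wording are assumptions for illustration):

```python
import torch

def _check_inputs(inputs):
    # Fail early with a clear message instead of an obscure downstream error.
    assert isinstance(inputs, dict) and all(
        isinstance(v, torch.Tensor) for v in inputs.values()
    ), "'inputs' must be a dict mapping input-layer names to torch.Tensor objects."
```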
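And the torch.ger swap is a drop-in rename; both compute the outer product of two 1-D tensors (the tensor names here are hypothetical):

```python
import torch

pre_trace, post_spikes = torch.rand(10), torch.rand(20)  # hypothetical 1-D inputs
update = torch.outer(pre_trace, post_spikes)  # same result the deprecated torch.ger gave
assert update.shape == (10, 20)
```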

@mahbodnr (Contributor, Author)

Thank you, @SimonInParis

  • I should elaborate on my point about the default value of nu. I fully agree that nu is highly model-dependent; nevertheless, I still believe None is a bad default and should be changed somehow. Most users do not change default values unless they have a specific need, so many may add a learning rule to their network without realizing the network is not changing at all; the current default is misleading and not user-friendly. In fact, I would go further: not only is zero a bad default, users should arguably be forbidden from setting nu to 0 at all (with an assert checking for a non-zero value). Why would someone add a learning rule and then set its learning rate to 0? The rule would literally do nothing. As for the default itself, even though the right value of nu depends on the problem, a value taken from a published source would be a fine choice; alternatively, nu could be made a mandatory argument so users have to set it themselves. Failing all of that, there should at least be a warning raised whenever this happens, informing users that their learning rule is not working and that they are practically wasting their time.
  • About the progress bar: it actually occurred to me while I was training on a very long time series, but I agree this does not happen often. Do you think it would be worth writing a custom progress bar instead of using tqdm, or is that not a good idea in general?

@Hananel-Hazan (Collaborator)

@mahbodnr, thank you for your help.

I agree: using STDP with the default value should raise a warning telling the user that nu is zero and that, as a result, there are no weight changes. The same goes for nu = 0 or nu = [0, 0]. However, Simon is correct that zeroing one side of nu is acceptable behavior, for example nu = [0, 0.2] when you want to use only one half of the STDP rule. In any case, these checks and warnings need to live in the initialization of the object, not in the compute path (a sketch follows).
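
A minimal sketch of such an init-time check (the function name and placement are assumptions; the behavior follows what was agreed above: warn on None, 0, or [0, 0], accept one-sided zeros like [0, 0.2]):

```python
import warnings
import torch

def check_nu(nu):
    # Treat None as [0, 0]; accept a scalar or a pair.
    nu = torch.zeros(2) if nu is None else torch.as_tensor(nu, dtype=torch.float)
    if torch.all(nu == 0):
        warnings.warn(
            "nu is zero on both the pre- and post-synaptic side; "
            "this learning rule will never change the weights.",
            stacklevel=2,
        )
    return nu
```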

About tqdm at the network level: adding it by default can complicate things for users who want to nest tqdm at the epoch level, training level, etc. tqdm is a nice addition, but it needs to be used carefully and tastefully, depending on the task and user preferences (illustrated below).
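
For illustration, this is the kind of nesting a bar built into run() would clash with (n_epochs, loader, and the loop body are hypothetical stand-ins):

```python
from tqdm import tqdm

n_epochs, loader = 3, range(5)  # hypothetical stand-ins for a real training loop
for epoch in tqdm(range(n_epochs), desc="epoch"):          # user-owned outer bar
    for batch in tqdm(loader, desc="batch", leave=False):  # user-owned inner bar
        pass  # network.run(...) would go here; a default bar inside run()
              # would add a clashing third level of output
```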

Please fix the issues above and I will accept the PR.

@mahbodnr (Contributor, Author)

mahbodnr commented Jul 3, 2021

@Hananel-Hazan Sure, thanks.

@mahbodnr (Contributor, Author)

mahbodnr commented Jul 3, 2021

@Hananel-Hazan done.

@Hananel-Hazan (Collaborator)

Thanks @mahbodnr for your help

@Hananel-Hazan merged commit 91ffafb into BindsNET:master on Jul 19, 2021.