Numerous Corrections to Hop Counts #47

Merged
merged 5 commits into hollie:master on Jan 31, 2013

Conversation

krkeegan
Collaborator

These changes make a dramatic difference in my ability to scan all of my link tables with the fewest errors. I have also noticed that even regular day-to-day operations are much more efficient.

The changes basically:

  • enable a hop count of 0
  • increase send timeouts for peek-related messages
  • add a brief 50-millisecond pause after receiving a message before sending a new one

I have not included any code for programmatically resetting the hop counts for devices. I do this manually with user code right now. I agree with Michael that some sort of running average might be a solution, but I haven't worked out an implementation of that which makes sense yet. In the meantime I am just resetting my hop counts to 0 at 4 AM, and this seems to work.
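Roughly, that user code looks like the sketch below; time_now is standard MisterHouse user-code syntax, but find_members and default_hop_count are hypothetical stand-ins for whatever accessors your Insteon objects actually expose.

    # Once a day at 4 AM, walk the Insteon devices and reset their stored
    # hop count back to 0.
    # NOTE: find_members() and default_hop_count() are placeholders, not
    # necessarily the real accessor names.
    if (time_now '4:00 AM') {
        for my $device (Insteon::find_members('Insteon::BaseDevice')) {
            $device->default_hop_count(0);
        }
    }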

Merging these changes shouldn't be held up while waiting for a solution to resetting the hop count.

The lowest hop count permitted by Insteon is 0, not 1.  A hop count of 0 means that the PLM will send the message, but no other devices are to repeat it.  Generally, devices near the PLM will not require any hopping.

-Fixed code to allow for setting a device's hop count to 0.
-Added message flags for hop counts of 0
-Set the initial hop count for new devices to 0
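For reference, the hop fields live in the low nibble of the standard message-flags byte: bits 3-2 carry "hops left" and bits 1-0 carry "max hops", and a freshly sent message has both fields equal.  A small illustration of the nibble values for each allowed hop count (illustration only, not the project's own flag table):

    # Build the low nibble of the Insteon flags byte for a hop count of 0..3.
    sub hop_nibble {
        my ($hops) = @_;
        die "hop count must be 0..3\n" if $hops < 0 or $hops > 3;
        return ($hops << 2) | $hops;   # bits 3-2: hops left, bits 1-0: max hops
    }

    printf "hops=%d -> 0x%X\n", $_, hop_nibble($_) for 0 .. 3;
    # hops=0 -> 0x0, hops=1 -> 0x5, hops=2 -> 0xA, hops=3 -> 0xF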

It is important to be judicious about increasing hop counts.  As hop counts increase, more messages are sent around the network, which can lead to unintended interference.  In addition, increasing the hop count increases response times, because the responding device will not answer until all possible repeats of the message have occurred.  Even if a device hears the message on the first broadcast, it will wait for all hops to complete before responding, to avoid collisions.
-Updated to allow for a hop_count of 0.  The comments in the file noted that a hop count of zero should exist, but the code never made allowances for it.
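To make the latency trade-off above concrete, here is a toy calculation; the 50 ms per-hop slot time is purely illustrative, not an Insteon spec value:

    # Toy model only: a responder waits out every allowed retransmission
    # slot before answering, so latency grows with the max hop count.
    my $slot_ms = 50;   # illustrative per-hop slot time, not a spec value
    for my $max_hops (0 .. 3) {
        printf "max_hops=%d -> roughly %d ms of extra wait before the reply\n",
            $max_hops, $slot_ms * $max_hops;
    }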

-Fixed the calculation used to determine the hop count.  This looks like a logical error: send_attempts was being added to default_hop_count and then decreased by one, and I am not sure of the logic there.  default_hop_count is equal to send_attempts for up to 3 attempts, so unless the device lacks a default_hop_count (which I believe is only the PLM), there is no need to look at send_attempts.  Moreover, adding send_attempts and hop_count resulted in an anomalous situation in which the timeout could be double what it should be.
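In rough pseudocode (a paraphrase of the change described above, not the actual diff), the calculation goes from the commented-out form to the returned one:

    # Paraphrase only.  default_hop_count and send_attempts stand in for
    # the fields discussed above.
    sub hop_count_for {
        my ($default_hop_count, $send_attempts) = @_;

        # Before: $default_hop_count + $send_attempts - 1, which could
        # roughly double the resulting timeout.

        # After: trust the device's own default; only a device without a
        # default_hop_count (i.e. the PLM) falls back to the attempt count.
        return defined $default_hop_count ? $default_hop_count : $send_attempts;
    }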

-Shifted hop_count timeout allocations by one.  This matches what is in the comment section, but for some reason was not implemented.
If a message arrives with hops left, then it may arrive again on the next hop.  In order to avoid collisions and to avoid confusing the logic in MH, delay sending the next message just slightly to allow any remaining hops to occur.
-Decreased timeout for messages received with hops_left.

-Added a brief 50-millisecond pause between receiving a message and sending the next one.  This allows a moment for duplicate messages to arrive.  I receive a number of duplicate messages with the same hop count; I believe these are caused by bridging between the powerline and RF signals, which apparently does not decrease the hop count.  The 50-millisecond pause allows the second copy to arrive, and as long as the next message has not been sent yet, it is gracefully ignored.
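The pause itself is simple; the sketch below shows the idea using Time::HiRes, with placeholder names rather than the actual MisterHouse code:

    use Time::HiRes qw(time usleep);

    # Remember when the last message arrived, and hold off transmitting
    # until at least 50 ms have passed, so a duplicate of the received
    # message can show up and be dropped before we send anything new.
    my $last_receive_time = 0;

    sub note_receive { $last_receive_time = time() }

    sub wait_before_send {
        my $elapsed = time() - $last_receive_time;
        usleep( (0.050 - $elapsed) * 1_000_000 ) if $elapsed < 0.050;
    }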
The timing of peek-related messages is crucial.  In order to prevent receiving messages out of order, we need to increase the send timeout dramatically.
hollie added a commit that referenced this pull request Jan 31, 2013
Numerous Corrections to Hop Counts
hollie merged commit f254e79 into hollie:master on Jan 31, 2013