
Publishers and Subscribers cause memory allocations in nodes where they are not matched by any local reader #172

Open
alsora opened this issue Apr 8, 2019 · 0 comments

Comments

@alsora
Copy link
Contributor

alsora commented Apr 8, 2019

Bug report

Required Info:

  • Operating System:
    • Ubuntu 18.04
  • Version or commit hash:
    • Crystal patch3 and master
  • DDS implementation:
    • Fast-RTPS and OpenSplice
  • Client library (if applicable):
    • rclcpp

Steps to reproduce issue

Run the following commands in different terminals:

ros2 run examples_rclcpp_minimal_subscriber subscriber_lambda
ps aux | grep subscriber_lambda
ros2 run examples_rclcpp_minimal_client client_main
ps aux | grep subscriber_lambda

Expected behavior

The memory used by the subscriber_lambda process should not change when an additional node is created that does not share any pub/sub/client/service with it.

Actual behavior

The memory used by the subscriber_lambda process increases.

Before:
RSS: 21192 VRT: 433292

After:
RSS: 21452 VRT: 498828

The increase in memory is proportional to the number of pub/sub/client/service created in the new node.
For example, slightly modifying the code of examples_rclcpp_minimal_client's client_main to create 12 clients instead of only 1 (see the sketch after the numbers) causes the following:

After:
RSS: 21720 VRT: 498828
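
A minimal sketch of such a modification, assuming the add_two_ints service type used by the minimal_client example (the exact structure of client_main differs; the loop count and variable names are only illustrative):

#include <memory>
#include <vector>

#include "rclcpp/rclcpp.hpp"
#include "example_interfaces/srv/add_two_ints.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("minimal_client");

  // Create 12 clients instead of 1: each client is an additional end-point
  // that remote nodes (e.g. subscriber_lambda) will discover.
  std::vector<rclcpp::Client<example_interfaces::srv::AddTwoInts>::SharedPtr> clients;
  for (int i = 0; i < 12; ++i) {
    clients.push_back(
      node->create_client<example_interfaces::srv::AddTwoInts>("add_two_ints"));
  }

  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}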

This behavior is observed with Fast-RTPS 1.7.0 and 1.7.2 as well as with OpenSplice (the version shipped with Crystal patch3).
Note that with OpenSplice, this should not happen if nodes are run in the same process.

Moreover, when this behavior is simulated with a plain Fast-RTPS example (outside of ROS 2), only the VRT increases.

Feature request

It looks like the RMW is allocating some memory whenever a new end-point is discovered.
However, this aggressive approach does not scale well (especially if every node contains a ParameterServer and a ParameterClient).

For example, consider a single-process application with multiple nodes using Fast-RTPS v1.7.0:
If we have 1 node and we add a second one, the memory overhead is approximately 10 MB.
If we have 20 nodes and we add a 21st one, the memory overhead is 23 MB.

I guess that the reason for this behavior is the following scenario:
Node A publishes to topic 1.
Node B is created and discovers Node A and its publisher.
Later, Node B creates a subscription to topic 1 and can immediately receive messages, as everything is already set up (see the sketch below).
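
A minimal single-process sketch of this scenario (node and topic names are illustrative; the Crystal-era rclcpp API is assumed):

#include <cstdio>
#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/string.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);

  // Node A publishes to topic 1.
  auto node_a = rclcpp::Node::make_shared("node_a");
  auto pub = node_a->create_publisher<std_msgs::msg::String>("topic_1");

  // Node B is created: discovery makes it aware of node A's publisher,
  // which is presumably where the RMW pre-allocates memory.
  auto node_b = rclcpp::Node::make_shared("node_b");

  // Later, node B subscribes to topic 1 and can receive messages right away,
  // since everything was already set up at discovery time.
  auto sub = node_b->create_subscription<std_msgs::msg::String>(
    "topic_1",
    [](std_msgs::msg::String::UniquePtr msg) {
      printf("received: %s\n", msg->data.c_str());
    });

  rclcpp::executors::SingleThreadedExecutor exec;
  exec.add_node(node_a);
  exec.add_node(node_b);
  exec.spin();

  rclcpp::shutdown();
  return 0;
}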
