
Resource-Aware Collaborative Monte Carlo Localization with Distribution Compression

This repository contains the implementation of the following publication:

@misc{zimmerman2024arxiv,
      title={{Resource-Aware Collaborative Monte Carlo Localization with Distribution Compression}}, 
      author={Nicky Zimmerman and Alessandro Giusti and Jérôme Guzzi},
      year={2024},
      eprint={2404.02010},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}

Under review for IROS 2024. Our live demo:

DEMO

Abstract

Global localization is essential in enabling robot autonomy, and collaborative localization is key for multi-robot systems. In this paper, we address the task of collaborative global localization under computational and communication constraints. We propose a method which reduces the amount of information exchanged and the computational cost. We also analyze, implement and open-source seminal approaches, which we believe to be a valuable contribution to the community.
We exploit techniques for distribution compression in near-linear time, with error guarantees. We evaluate our approach and the implemented baselines on multiple challenging scenarios, simulated and real-world. Our approach can run online on an onboard computer. We release an open-source C++/ROS2 implementation of our approach, as well as the baselines.

Results

We conducted a thorough evaluation of the different approaches to collaborative localization. We present our experiments to show the capabilities of our method, Compress++ MCL. The results support the claims that our proposed approach (i) improves collaborative localization, (ii) decreases the required bandwidth, (iii) reduces the computational load, and (iv) runs online on an onboard computer.

Installation

We provide Docker installations for ROS 2 Humble. Make sure you have installed Docker Engine and the NVIDIA Container Toolkit so you can run Docker with GPU support. Download the external resources on the host machine into the /ros_ws/src directory

git clone git@github.com:jeguzzi/robomaster_ros.git
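If the source directory does not exist yet on the host, a sketch of the assumed layout (the exact host path that is mapped to /ros_ws in Docker depends on the Makefile, so treat the paths as an assumption):

mkdir -p ros_ws/src && cd ros_ws/src
git clone git@github.com:jeguzzi/robomaster_ros.git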

The package is necessary for running the demo, as the rosbag includes RoboMaster-specific detection messages. However, these messages are not strictly necessary for any of the approaches, and the dependency can be removed with a bit of work on the ROS wrapper. In the root directory, run the following to build the Docker image

sudo make humble=1

To enable RViz visualization from the Docker container, run on the host machine

 xhost +local:docker

To run the Docker container

sudo make run humble=1

The core code is C++ with no ROS dependency. To build the ncore library inside the Docker container, run

cd src/cmcl/cmcl_ros/ncore && mkdir build && cd build && cmake .. -DBUILD_TESTING=1 && make -j12 
./bin/UnitTests

Run the unit tests to verify that the installation is successful. Then build the ROS 2 wrapper

cd /ros_ws && . /opt/ros/humble/setup.bash &&  colcon build && . install/setup.bash
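As a quick sanity check after sourcing the workspace, you can verify that the cmcl_ros package is visible to ROS 2:

ros2 pkg list | grep cmcl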

Running the Algo

You can download the rosbag from here. Place it in the /ros_ws/rosbags directory and use the Demo.sh script to run it. Remember to enable RViz visualization for Docker (see the xhost command above).
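A minimal sketch of the assumed sequence inside the container (the rosbag filename and the location of Demo.sh are assumptions):

mkdir -p /ros_ws/rosbags
cp /path/to/downloaded_rosbag /ros_ws/rosbags/
cd /ros_ws && ./Demo.sh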

MCLNode

The main node, a ROS 2 wrapper around the C++ implementation of the different approaches. It requires as input a minimum of two topics, Odometry and PointCloud2, for single-robot range-based MCL. To enable collaborative localization, RobotDetection messages are needed. These messages are agnostic to the detection algorithm; if you wish to use your own detection pipeline, just write a ROS node that publishes these messages. The topic names are configurable through the launch file. The node runs the predict and correct steps asynchronously: predict is executed whenever odometry arrives, and correct is executed when an observation arrives. MCLNode publishes PoseWithCovarianceStamped messages with the pose prediction. To see the predictions in RViz, run

ros2 launch cmcl_ros cmcl_real.launch
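To inspect the output while the launch file is running, you can list the pose topics and echo the estimate; the topic name /cmcl/pose below is only an assumption, since the actual name is set in the launch file:

ros2 topic list | grep -i pose
ros2 topic echo /cmcl/pose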

LidarScan2PointCloudNode

This node converts the LaserScan messages from our YDLIDAR Tmini Pro 2D LiDAR to a point cloud in the base_link frame.

DetectionNoiseNode

This node is used to add noise to "perfect" detections generated by our simulation, to make the scenarios more realistic.

UVDetection2WorldNode

This node takes as input the RoboMaster Detection messages, which are 2D bounding boxes in image coordinates, and provides a 3D position of the detected robot in the base_link frame. The distance to the detected robot is computed using the known dimensions of the RoboMaster and the 2D bounding boxes.
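For intuition, the assumed computation follows the standard pinhole-camera relation: with camera focal length f (in pixels), a known physical dimension H of the RoboMaster (in meters), and the corresponding bounding-box extent h (in pixels), the distance is approximately d ≈ f · H / h; the bounding-box center and the camera intrinsics then give the bearing to the detected robot. Whether the node uses exactly this form is an assumption.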

EvalNode

This node was used to log the localization predictions for the evaluation.
