
Using pyrealsense2 functions with ROS subscribers #1711

Closed · LeeAClift opened this issue Feb 19, 2021 · 9 comments

@LeeAClift
Hi, sorry if this is a stupid question. I am currently trying to convert some of my code from using a physical camera to a simulated camera. I have successfully simulated a camera in Gazebo, but I now need to convert my code from using a pipeline to using ROS subscribers.

Specifically, the two functions my code currently depends on are get_distance and deproject_pixel_to_point.

Is there a simple way to get these functions to work with ROS subscribers, and not a pipeline?

I had assumed it would be something to do with /camera/depth/, but I am unsure if that is correct, and how to go about doing it.

Any kind of help would be invaluable.

@MartyG-RealSense (Collaborator) commented Feb 19, 2021

Hi @LeeAClift. My research for your question showed that other Gazebo users asking about this subject had the idea to use get_distance() because that is how they would do it with a physical camera in pyrealsense2. They typically ended up being recommended other methods, though.

As an example, the link below provides scripting for setting up a ROS subscriber to the /camera/depth/image_rect_raw topic to obtain the depth frame.

https://stackoverflow.com/questions/62938146/getting-realsense-depth-frame-in-ros

Another Gazebo user using a RealSense camera subscribed to /camera/depth/image_raw to obtain an image that they could index with their coordinates of interest and obtain depth in mm.

IntelRealSense/librealsense#6847
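
A minimal sketch of that indexing approach (the topic name and 16-bit millimetre depth encoding are assumptions that depend on your camera or plugin configuration):

#!/usr/bin/env python

# Sketch: look up depth at one pixel of interest from a ROS depth image.
# Assumes a 16UC1 image (depth in mm) on /camera/depth/image_raw.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_depth(msg):
    # Convert the ROS image to a numpy array, keeping the raw encoding
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    u, v = 320, 240  # pixel of interest (column, row)
    print("depth at (%d, %d): %s mm" % (u, v, depth[v, u]))

rospy.init_node('depth_at_pixel')
rospy.Subscriber('/camera/depth/image_raw', Image, on_depth, queue_size=1)
rospy.spin()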

@LeeAClift (Author)

Hi Marty, thanks for your quick response.

This is a similar conclusion to the one I came to. Assuming that neither of the functions I am already using will work, do you have any suggestions for an alternative to rs2_deproject_pixel_to_point? That function is very much the foundation of my existing code.

@MartyG-RealSense (Collaborator) commented Feb 19, 2021

I conducted further research about your question above regarding an alternative to rs2_deproject_pixel_to_point, presumably for use with a Gazebo simulation. I located a tutorial for publishing simulated Gazebo sensor data to RViz via a RealSense URDF. Does this fit what you had in mind?

https://roboticsknowledgebase.com/wiki/tools/gazebo-simulation/


Whilst Gazebo excels at simulating robot projects, if your main goal is just to simulate a RealSense device (without it being part of a robot), then an alternative may be to create a simulated camera in the librealsense SDK with software_device.

https://github.com/IntelRealSense/librealsense/tree/master/examples/software-device

Available information about implementing it in Python is limited, though the scripting in the link below may be helpful.

IntelRealSense/librealsense#7057

@LeeAClift (Author)

Hi Marty,

Thank you once again for your reply.

I have followed a tutorial similar to the one you linked, which allowed me to get a URDF simulation of a D435i into Gazebo that successfully publishes data to RViz, including both colour and depth data.

My goal from here is to find a way to take that data and achieve an end result similar to rs2_deproject_pixel_to_point, where I can input a pixel pair and receive its real-world coordinates.

I have been investigating some other ROS packages which could do this, such as depth_image_proc/point_cloud_xyz, although I am struggling immensely. Your continued assistance is very much appreciated.

@MartyG-RealSense (Collaborator)

My recollection is that once a ROS point cloud has been generated, XYZ coordinates can be obtained with depth_image_proc, as you suggest. Here is an example link.

https://answers.ros.org/question/310996/how-to-get-xyz-and-rgb-of-each-pixel-from-a-sensormsg-image/

Another article provided Python scripting for obtaining XYZ from sensor_msgs.PointCloud2:

https://www.programmersought.com/article/9245461951/
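
If you prefer to avoid point clouds entirely, rs2_deproject_pixel_to_point is essentially the pinhole back-projection, so a rough equivalent can also be built directly from the intrinsics published alongside the depth image. A minimal sketch (the topic names and millimetre depth units are assumptions for a typical Gazebo depth camera plugin):

#!/usr/bin/env python

# Sketch of a rs2_deproject_pixel_to_point equivalent using the pinhole
# model and the intrinsics from the camera_info topic.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image, CameraInfo

bridge = CvBridge()
K = {}

def on_info(msg):
    # msg.K = [fx, 0, cx, 0, fy, cy, 0, 0, 1]
    K.update(fx=msg.K[0], fy=msg.K[4], cx=msg.K[2], cy=msg.K[5])

def on_depth(msg):
    if not K:
        return  # no intrinsics received yet
    depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
    u, v = 320, 240                  # pixel of interest (column, row)
    z = float(depth[v, u]) / 1000.0  # assumes 16UC1 depth in mm -> metres
    x = (u - K['cx']) * z / K['fx']
    y = (v - K['cy']) * z / K['fy']
    print("XYZ: %.3f %.3f %.3f" % (x, y, z))

rospy.init_node('deproject_pixel')
rospy.Subscriber('/camera/depth/camera_info', CameraInfo, on_info, queue_size=1)
rospy.Subscriber('/camera/depth/image_raw', Image, on_depth, queue_size=1)
rospy.spin()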

@LeeAClift (Author)

Thanks for these links Marty. I will look over them tomorrow, as it's getting quite late here, and report back.

From a quick glance, the first link is what I have been trying today, and I'd imagine with enough time I should be able to crack it.

The second link looks very interesting, and I will look at implementing that, as there is already depth data in that form being published by Gazebo.

@MartyG-RealSense (Collaborator)

Thanks very much for the update - good luck!

@LeeAClift (Author)

Hi Marty, just an update: your second link (the Python code) works well and gives out the coordinates as needed, as seen in my screenshot.

[Screenshot from 2021-02-20 12-17-15]

I would like to thank you for all your quick responses and help, you've gone above and beyond!

I'm closing this thread now, but for anyone else who needs the equivalent of either get_distance or rs2_deproject_pixel_to_point on Ubuntu 18 and ROS Melodic, here's my complete summary and current workflow:

  1. Get the Intel RealSense Gazebo camera description and plugin files from these forks:
    https://github.com/issaiass/realsense_gazebo_plugin
    https://github.com/issaiass/realsense2_description
    These can be launched via the launch files in the description fork; they should boot both Gazebo and RViz.

  2. The normal get_distance function doesn't work here, as it relies on an active pipeline, so an alternative is needed. In this case I use this Python code:

#!/usr/bin/env python

import rospy
from cv_bridge import CvBridge, CvBridgeError
from sensor_msgs.msg import Image
import numpy as np

def convert_depth_image(ros_image):
    bridge = CvBridge()
    try:
        # Use cv_bridge to convert the ROS image to OpenCV format,
        # keeping the original depth encoding via "passthrough"
        depth_image = bridge.imgmsg_to_cv2(ros_image, desired_encoding="passthrough")
        # Convert the depth image to a numpy array and read the centre pixel
        depth_array = np.array(depth_image, dtype=np.float32)
        center_idx = np.array(depth_array.shape) // 2
        print('center depth: %s' % depth_array[center_idx[0], center_idx[1]])
    except CvBridgeError as e:
        print(e)


def pixel2depth():
    rospy.init_node('pixel2depth', anonymous=True)
    rospy.Subscriber("/camera/depth/image_raw", Image, callback=convert_depth_image, queue_size=1)
    rospy.spin()

if __name__ == '__main__':
    pixel2depth()
  3. Finally, to get all the X, Y and Z coordinates, you can use this bit of Python code, which subscribes to the points being published to RViz and prints the real-world X, Y and Z: X going positive to the right from centre, Y going positive down from centre, and Z going positive out towards whatever the camera is pointing at.
#! /usr/bin/env python

from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2
import rospy
import time

def callback_pointcloud(data):
    assert isinstance(data, PointCloud2)
    # Generator over the cloud's (x, y, z) fields, skipping NaN points
    gen = point_cloud2.read_points(data, field_names=("x", "y", "z"), skip_nans=True)
    time.sleep(1)
    print(type(gen))
    for p in gen:
        print(" x : %.3f  y: %.3f  z: %.3f" % (p[0], p[1], p[2]))

def main():
    rospy.init_node('pcl_listener', anonymous=True)
    rospy.Subscriber('/camera/depth/color/points', PointCloud2, callback_pointcloud)
    rospy.spin()

if __name__ == "__main__":
    main()
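
One extra tip: if you only need the XYZ for a specific pixel pair (the rs2_deproject_pixel_to_point use case), read_points also accepts a uvs argument listing (column, row) pairs, which avoids iterating over the whole cloud. This assumes the cloud is organized (height > 1), which I believe it is for the Gazebo depth plugin:

# Read just one pixel's point instead of the whole cloud
gen = point_cloud2.read_points(data, field_names=("x", "y", "z"),
                               skip_nans=True, uvs=[(320, 240)])
for p in gen:
    print(" x : %.3f  y: %.3f  z: %.3f" % (p[0], p[1], p[2]))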

@MartyG-RealSense (Collaborator)

You're very welcome @LeeAClift - thanks so much for sharing your detailed method and code with the RealSense community :)
