Using pyrealsense2 functions with ROS subscribers #1711
Hi @LeeAClift, my research into your question showed that other Gazebo users asking about this subject had the idea of using get_distance() because that is how they would do it with a physical camera in pyrealsense2. They typically ended up being recommended other methods, though. As an example, the link below provides scripting for setting up a ROS subscriber to the /camera/depth/image_rect_raw topic to obtain the depth frame.

https://stackoverflow.com/questions/62938146/getting-realsense-depth-frame-in-ros

Another Gazebo user with a RealSense camera subscribed to /camera/depth/image_raw to obtain an image that they could index with their coordinates of interest to obtain depth in mm.
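As a sketch of that second approach: once the depth image has been converted to a NumPy array, reading the depth at a pixel is plain array indexing. The topic name, the 16UC1 encoding, and the example pixel are assumptions based on the usual RealSense/Gazebo defaults; the rospy wiring is shown in comments since it only runs inside a ROS node, and a stand-in frame is used below so the core function can be exercised on its own.

```python
import numpy as np

def depth_at_pixel(depth_image, u, v):
    """Return the depth in millimetres at pixel column u, row v.

    depth_image is the HxW uint16 array obtained from a
    sensor_msgs/Image on /camera/depth/image_rect_raw, e.g. via
    cv_bridge: bridge.imgmsg_to_cv2(msg, desired_encoding="16UC1").
    Note the row-first indexing: image arrays are [row, column].
    """
    return int(depth_image[v, u])

# Inside a ROS node the function would be driven by a subscriber:
#
#   import rospy
#   from sensor_msgs.msg import Image
#   from cv_bridge import CvBridge
#
#   bridge = CvBridge()
#
#   def callback(msg):
#       depth = bridge.imgmsg_to_cv2(msg, desired_encoding="16UC1")
#       rospy.loginfo("depth at (320, 240): %d mm", depth_at_pixel(depth, 320, 240))
#
#   rospy.init_node("depth_reader")
#   rospy.Subscriber("/camera/depth/image_rect_raw", Image, callback)
#   rospy.spin()

# Stand-in for a received frame: a 480x640 image reading 1500 mm everywhere.
frame = np.full((480, 640), 1500, dtype=np.uint16)
print(depth_at_pixel(frame, 320, 240))  # 1500
```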
Hi Marty, thanks for your quick response. This is a similar conclusion to the one I came up with. Assuming that neither of the functions I am already using will work, do you have any suggestions for an alternative to rs2_deproject_pixel_to_point? That function is very much the foundation of my existing code.
I conducted further research into your question above about an alternative to rs2_deproject_pixel_to_point, presumably for use with a Gazebo simulation. I located a tutorial for publishing simulated Gazebo sensor data to RViz via a RealSense URDF. Does this fit what you had in mind, please?

https://roboticsknowledgebase.com/wiki/tools/gazebo-simulation/

Whilst Gazebo excels at simulating robot projects, if your main goal is just to simulate a RealSense device (without it being part of a robot) then an alternative may be to create a simulated camera in the librealsense SDK with software_device.

https://github.com/IntelRealSense/librealsense/tree/master/examples/software-device

Available information about implementing it in Python is limited, though the scripting in the link below may be helpful.
Hi Marty, thank you once again for your reply. I have followed a similar tutorial to the one you linked, which allowed me to get a URDF simulation of a D435i into Gazebo that successfully publishes both colour and depth data to RViz. My goal from here is to find a way to take that data and produce an end result similar to rs2_deproject_pixel_to_point, where I can input a pixel pair and receive its real-world coordinates. I have been investigating some other ROS packages that could do this, such as depth_image_proc/point_cloud_xyz, although I am struggling immensely. Your continued assistance is very much appreciated.
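For what it's worth, the geometry that rs2_deproject_pixel_to_point implements for an undistorted depth image is just the inverse pinhole model, so it can be reproduced from the intrinsics published on the camera's camera_info topic (fx and fy are K[0] and K[4], cx and cy are K[2] and K[5]). A minimal sketch, using made-up example intrinsics rather than values from a real camera_info message:

```python
def deproject_pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Inverse pinhole projection: pixel (u, v) at `depth` metres
    becomes a 3D point (X, Y, Z) in the camera frame.

    This matches rs2_deproject_pixel_to_point for a stream with
    zero distortion, which is the usual case for a simulated camera.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Illustrative intrinsics (assumed values, not from a real device).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

# A pixel at the principal point lies on the optical axis...
print(deproject_pixel_to_point(320.0, 240.0, 1.5, fx, fy, cx, cy))  # (0.0, 0.0, 1.5)
# ...and a pixel 60 columns to the right is offset in X.
print(deproject_pixel_to_point(380.0, 240.0, 1.5, fx, fy, cx, cy))  # (0.15, 0.0, 1.5)
```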
My recollection is that once a ROS point cloud has been generated, XYZ coordinates can be obtained with depth_image_proc, as you suggest. Here is an example link.

https://answers.ros.org/question/310996/how-to-get-xyz-and-rgb-of-each-pixel-from-a-sensormsg-image/

Another article provided Python scripting for obtaining XYZ from sensor_msgs.PointCloud2:
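To illustrate what reading XYZ out of a PointCloud2 amounts to: the message carries a flat byte buffer, and for an organized cloud the point at image coordinates (u, v) sits at a fixed offset computed from row_step and point_step. In a real node you would let sensor_msgs.point_cloud2.read_points do this; the sketch below unpacks the bytes directly, and the x/y/z field offsets of 0/4/8 are an assumption (the actual offsets are listed in the message's `fields` array).

```python
import struct

def point_at_pixel(data, row_step, point_step, u, v, xyz_offsets=(0, 4, 8)):
    """Read the (x, y, z) float32 fields of the point at image
    coordinates (u, v) from a PointCloud2-style byte buffer.

    Roughly what sensor_msgs.point_cloud2.read_points(cloud, uvs=[(u, v)])
    does for an organized cloud; offsets 0/4/8 assume the common
    x, y, z layout and must match the message's `fields` entries.
    """
    base = v * row_step + u * point_step
    return tuple(struct.unpack_from("<f", data, base + off)[0]
                 for off in xyz_offsets)

# Build a tiny 2x2 "organized cloud": point_step=12 (three float32s
# per point), row_step=24 (two points per row).
points = [(0.0, 0.0, 1.0), (0.5, 0.0, 1.0),
          (0.0, 0.25, 1.0), (0.5, 0.25, 1.0)]
data = b"".join(struct.pack("<fff", *p) for p in points)

print(point_at_pixel(data, row_step=24, point_step=12, u=1, v=1))  # (0.5, 0.25, 1.0)
```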
Thanks for these links Marty. I will look over them tomorrow, as it's getting quite late here, and report back. From a quick glance, the first link is what I have been trying today, and I'd imagine with enough time I should be able to crack it. The second link looks very interesting, and I will look at implementing that, as depth data in that form is already being published by Gazebo.
Thanks very much for the update - good luck!
Hi Marty, just an update: your second link (the Python code) works well and gives out the coordinates as needed, as seen in my screenshot. I would like to thank you for all your quick responses and help; you've gone above and beyond! I'm closing this thread now, but for anyone else who needs to get effects similar to get_distance or rs2_deproject_pixel_to_point working on Ubuntu 18 and ROS Melodic, here's my complete summary and current workflow:
You're very welcome @LeeAClift - thanks so much for sharing your detailed method and code with the RealSense community :)
Hi, sorry if this is a stupid question. I am currently trying to convert some of my code from using a physical camera to a simulated camera. I have successfully simulated a camera in Gazebo, but I now need to convert my code from using a pipeline to using ROS subscribers.
The two functions my code currently depends on are get_distance and deproject_pixel_to_point.
Is there a simple way to get these functions to work with ROS subscribers rather than a pipeline?
I had assumed it would involve /camera/depth/, but I am unsure whether that is correct, or how to go about it.
Any kind of help would be invaluable.