convert selected 2D pixels to 3D points using pyrealsense #12065
The RealSense SDK provides support for basic pointcloud generation, but for advanced pointcloud features it is recommended to combine the RealSense data with a specialized external pointcloud library such as PCL or Open3D, as the RealSense SDK does not provide similar built-in features. In regard to object detection, numerous RealSense users make use of YOLO, sometimes together with ROS tools such as darknet_ros for 2D bounding boxes and darknet_ros_3d for 3D bounding boxes - see IntelRealSense/realsense-ros#2311 (comment). The rs_aligned_depth.launch launch file is very old, so I would recommend using the newer rs_camera.launch file and enabling alignment in the launch with `align_depth:=true`.
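For reference, the recommended launch invocation would look something like this (assuming the ROS1 realsense2_camera wrapper):

```shell
# Start the camera node with depth-to-color alignment enabled
roslaunch realsense2_camera rs_camera.launch align_depth:=true
```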
Regarding convert_depth_to_phys_coord_using_realsense, I am not familiar enough with this code to offer advice on why you are experiencing an orientation problem with it, unfortunately. Though RealSense cameras do use a different axis system from the ROS default of Z = up and X = forward. For RealSense camera optical frames, the axis system is Z = forward and Y = down. The RealSense ROS wrapper converts the RealSense axis system to the ROS standard axis system.
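The optical-to-ROS conversion mentioned above can be sketched in a few lines; this follows the REP 103/105 frame conventions (optical frame: x right, y down, z forward; ROS body frame: x forward, y left, z up), and is an illustration rather than the wrapper's actual code:

```python
def optical_to_ros_body(x_opt, y_opt, z_opt):
    """Convert a point from a camera optical frame (x right, y down, z forward)
    to the ROS body-frame convention (x forward, y left, z up).

    Illustrative sketch of the axis remapping, not the realsense-ros source.
    """
    return (z_opt, -x_opt, -y_opt)

# A point 1 m straight ahead of the lens maps to +x (forward) in the body frame.
assert optical_to_ros_body(0.0, 0.0, 1.0) == (1.0, 0.0, 0.0)
```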
Thank you for your response. I will look into YOLO and darknet_ros in depth for answers to my queries.
Hey @MartyG-RealSense, I was able to solve the above problem by using the Intel RealSense D435 URDF file and then publishing the camera points data with the right header:

```python
import rospy
from std_msgs.msg import Header
from sensor_msgs.msg import PointField
from sensor_msgs import point_cloud2

def publish_depth_points(points):
    fields = [PointField('x', 0, PointField.FLOAT32, 1),
              PointField('y', 4, PointField.FLOAT32, 1),
              PointField('z', 8, PointField.FLOAT32, 1),
              PointField('intensity', 12, PointField.FLOAT32, 1)]
    header = Header()
    # This ensures that the points are published with respect to the present
    # location and orientation of the camera.
    header.frame_id = "camera_color_optical_frame"
    # header.frame_id = "camera_color_frame"
    header.stamp = rospy.Time.now()
    pc2 = point_cloud2.create_cloud(header, fields, points)
    # `pub` is a rospy.Publisher for the PointCloud2 topic, created elsewhere.
    pub.publish(pc2)
```
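As a side note, the four FLOAT32 fields declared above pack each point into 16 bytes at byte offsets 0, 4, 8 and 12; the layout can be checked without ROS using plain `struct` (the point values here are arbitrary examples):

```python
import struct

# One point with the x/y/z/intensity layout used above:
# four little-endian FLOAT32 values at byte offsets 0, 4, 8 and 12.
point = (1.0, 2.0, 0.5, 200.0)
packed = struct.pack('<ffff', *point)

assert len(packed) == 16                       # 16 bytes per point
assert struct.unpack('<ffff', packed) == point
```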
It's excellent news that you achieved a solution, @mistimohan - thanks so much for sharing it! As you have solved your issue, I will close this case. Thanks again!
Originally posted by @MartyG-RealSense in #11788 (comment)
Quite new to GitHub, so please ignore any of my trivial mistakes.
Hey, sorry for the late response, as I have been stuck with another area of the project. I tried the rs2_project_color_pixel_to_depth_pixel method you suggested, but it only gives me a depth pixel (x and y, I assume), not the 3D point. I am doing object detection on the RGB image and I need to filter out (or calculate) the point cloud of only the detected object. To clarify, my requirement is: **I only need the part of the point cloud corresponding to the desired object in the RGB image**, rather than the whole point cloud from pc.calculate(depth_frame). So I need a method that converts a set of 2D pixels (or a masked RGB/depth image) to 3D points in space using pyrealsense.

I was hoping that RealSense could or should provide this basic masked-RGB-image-to-pointcloud conversion, as RealSense is used extensively for object detection and processing. Also, why is there a change in orientation compared to the real image when I use convert_depth_to_phys_coord_using_realsense to create the point cloud?
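For a single pixel, the SDK's rs2_deproject_pixel_to_point does this conversion (in pyrealsense2: `rs.rs2_deproject_pixel_to_point(intrinsics, [u, v], depth_m)`). The underlying pinhole math, ignoring lens distortion, is sketched below; the intrinsics values are made-up placeholders, not real camera parameters:

```python
def deproject_pixel_to_point(u, v, depth_m, fx, fy, ppx, ppy):
    """Pinhole back-projection of pixel (u, v) with depth in metres.

    Mirrors what rs2_deproject_pixel_to_point computes for the
    distortion-free case; fx/fy are focal lengths in pixels and
    ppx/ppy is the principal point, both taken from the stream's
    intrinsics (e.g. depth_frame.profile.as_video_stream_profile().intrinsics).
    """
    x = (u - ppx) / fx * depth_m
    y = (v - ppy) / fy * depth_m
    return (x, y, depth_m)

# Hypothetical intrinsics for illustration only.
fx = fy = 600.0
ppx, ppy = 320.0, 240.0

# The principal point deprojects onto the optical axis.
assert deproject_pixel_to_point(320, 240, 1.0, fx, fy, ppx, ppy) == (0.0, 0.0, 1.0)
```

Running this over only the pixels inside a detection mask, instead of the full frame, gives exactly the object-only point cloud asked about above.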
The image above is what I get when I use `roslaunch realsense2_camera rs_aligned_depth.launch`.
The image above is what I get when I use my code:
```python
def get_3d_coord(depth_image, camera_info_msg, x1=0, y1=0, x2=480, y2=640):
    points = []
    for i in np.arange(x1, x2, 4):
        for j in np.arange(y1, y2, 4):
            ...  # loop body not included in the post
```
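A possible completion of a routine like this (a sketch, not the poster's actual code) deprojects each sampled pixel using the fx/fy/ppx/ppy entries of the CameraInfo intrinsic matrix; note that the defaults x2=480, y2=640 match a 480-row by 640-column image, so the outer index walks rows:

```python
import numpy as np

def get_3d_coord_sketch(depth_image, K, x1=0, y1=0, x2=480, y2=640,
                        depth_scale=0.001, step=4):
    """Deproject every `step`-th pixel of a region into a 3D point in metres.

    K is a 3x3 intrinsic matrix (e.g. sensor_msgs/CameraInfo.K reshaped to
    3x3); depth_image holds raw uint16 depth values. Illustrative sketch
    only, assuming a D4xx-style depth scale of 0.001 m per unit.
    """
    fx, fy = K[0, 0], K[1, 1]
    ppx, ppy = K[0, 2], K[1, 2]
    points = []
    for v in range(x1, x2, step):        # rows (480 by default)
        for u in range(y1, y2, step):    # columns (640 by default)
            z = depth_image[v, u] * depth_scale
            if z == 0:
                continue                 # no valid depth at this pixel
            x = (u - ppx) / fx * z
            y = (v - ppy) / fy * z
            points.append((x, y, z))
    return points
```

Passing only the pixel range (or mask) of a detected object restricts the output to that object's points.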