
software_device usage on python #7057

Closed
kouta-kun opened this issue Aug 7, 2020 · 9 comments
@kouta-kun


Required Info
Camera Model: D435
Firmware Version: 05.11.01.100 (on dev camera, but shouldn't apply)
Operating System & Version: Arch Linux (rolling release)
Kernel Version (Linux Only): 5.7.7
Platform: PC
SDK Version: 2.35.2
Language: Python
Segment: Desktop

Issue Description

Hello, we are currently trying to send depth frames from a remote device (a Jetson Nano) to a centralized server so that alignment and other filters can be applied there, as our edge device seems to be too slow for our use case. I have tried to use software_device as follows:

import os
import pickle

import cv2
import numpy as np
import pyrealsense2 as rs

SIMULATED_SN = "1112"

ctx = rs.context()

soft_dev = rs.software_device()
soft_dev.register_info(rs.camera_info.serial_number, SIMULATED_SN)
soft_dev.register_info(rs.camera_info.advanced_mode, "YES")
soft_dev.register_info(rs.camera_info.debug_op_code, "15")
soft_dev.register_info(
    rs.camera_info.firmware_version, "05.10.13.00\n255.255.255.255"
)
soft_dev.register_info(rs.camera_info.name, "Intel RealSense D435 (Emulated)")
soft_dev.register_info(rs.camera_info.physical_port, "/no/path")
soft_dev.register_info(rs.camera_info.product_id, "0B3A")
soft_dev.register_info(
    rs.camera_info.recommended_firmware_version, "05.10.03.00"
)
soft_dev.register_info(rs.camera_info.usb_type_descriptor, "3.2")

depth_sensor: rs.software_sensor = soft_dev.add_sensor("Depth")

intrinsics = rs.intrinsics()

intrinsics_file = open("intrinsics.pkl", "rb")
intrinsics_dictionary = pickle.load(intrinsics_file)

for k in intrinsics_dictionary:
    setattr(intrinsics, k, intrinsics_dictionary[k])

depth_stream = rs.video_stream()
depth_stream.type = rs.stream.depth
depth_stream.width = intrinsics.width
depth_stream.height = intrinsics.height
depth_stream.fps = 30
depth_stream.bpp = 2
depth_stream.fmt = rs.format.z16
depth_stream.intrinsics = intrinsics
depth_stream.index = 0
depth_stream.uid = 1312

depth_profile = depth_sensor.add_video_stream(depth_stream)

soft_dev.add_to(ctx)

print(list(ctx.query_devices()))

config = rs.config()
# config.disable_all_streams()
config.enable_device(SIMULATED_SN)
config.enable_stream(rs.stream.depth, 0, intrinsics.width, intrinsics.height, rs.format.z16, 30)

pipe = rs.pipeline(ctx)
prof = pipe.start(config)

dstream = prof.get_stream(rs.stream.depth)
dstream_prof = dstream.as_video_stream_profile()

colorizer = rs.colorizer()

for i in [f for f in os.listdir() if 'npy' in f]:
    print(f'loading frame {i}')
    depth_npy = np.load(i, mmap_mode='r')

    vid_frame = rs.software_video_frame()
    vid_frame.stride = depth_stream.width * depth_stream.bpp
    vid_frame.bpp = depth_stream.bpp
    vid_frame.timestamp = 0.0
    vid_frame.pixels = depth_npy
    vid_frame.domain = rs.timestamp_domain.hardware_clock
    vid_frame.frame_number = int(i.split('.')[0])
    vid_frame.profile = dstream_prof
    print('created software frame')

    depth_sensor.on_video_frame(vid_frame)
    print('passed to depth_sensor')
    frames = pipe.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    depth_frame = colorizer.colorize(depth_frame)
    npy_frame = np.asanyarray(depth_frame.get_data())
    cv2.imshow("colorized", npy_frame)
    cv2.waitKey(1000//60)

However, this results in the following error:

[<pyrealsense2.device: Software-Device
Intel RealSense D435 (Emulated) (S/N: 1112)>]
Traceback (most recent call last):
  File "/home/kouta/PycharmProjects/rs-postprocessing-remotely/after_process.py", line 60, in <module>
    prof = pipe.start(config)
RuntimeError: No device connected

Process finished with exit code 1

I have also tried to circumvent this by connecting a physical D435 camera and uncommenting the disable_all_streams line, but the same error occurs:

[<pyrealsense2.device: Intel RealSense D435 (S/N: 938422073181)>, <pyrealsense2.device: Software-Device
Intel RealSense D435 (Emulated) (S/N: 1112)>]
Traceback (most recent call last):
  File "/home/kouta/PycharmProjects/rs-postprocessing-remotely/after_process.py", line 60, in <module>
    prof = pipe.start(config)
RuntimeError: No device connected

Process finished with exit code 1

Also, trying to follow the C++/C# examples for software_device does not work, as the create_matcher method is not exposed in the pyrealsense2 wrapper.

So, how can one simulate a camera on a server running Python code? Using .bag files does not work for our use case, since the server is expected to work with packets sent by an edge device in real time. Thanks in advance!
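For shipping the raw frames across the network, a minimal wire format might help. This is a hypothetical sketch (none of these names come from the librealsense API): pack width, height, and frame number into a fixed struct header, append the raw z16 payload, and reconstruct the numpy array on the server before handing its bytes to software_sensor.on_video_frame.

```python
import struct

import numpy as np

# Hypothetical header layout: width, height, frame number (little-endian uint32 each).
HEADER = struct.Struct("<III")


def pack_depth_frame(depth: np.ndarray, frame_number: int) -> bytes:
    """Serialize a z16 (uint16) depth frame as header + raw payload."""
    assert depth.dtype == np.uint16
    height, width = depth.shape
    return HEADER.pack(width, height, frame_number) + depth.tobytes()


def unpack_depth_frame(payload: bytes):
    """Reconstruct the depth array and frame number from the wire format."""
    width, height, frame_number = HEADER.unpack_from(payload)
    depth = np.frombuffer(payload, dtype=np.uint16, offset=HEADER.size)
    return depth.reshape(height, width), frame_number
```

On the server side, the unpacked array's `.tobytes()` could then be assigned to `vid_frame.pixels` in place of the `.npy` files in the snippet above.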

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 7, 2020

Hi @kouta-kun Your project sounds as though it would be suited to an open-source Ethernet networking arrangement like the one described in Intel's RealSense open-source networking white paper. Using two software components (a tool called rs-server and a realsense2-net module), the individual cameras are attached to Raspberry Pi 4 boards (the remote computers), and the data from each camera is sent to a central computer (the host) that can access it.

https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras

The paper states that although Pi 4 boards are used in the paper's example network, the system can be applied with minor modification to other compute boards.

@kouta-kun
Author

Hi @MartyG-RealSense, we have already evaluated such a system for some tasks (we first saw https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/ethernet_client_server, which we modified). However, some frames are preprocessed on the Jetson (object detection using PyTorch), so we cannot use a method that requires exclusive ownership of the camera. Would it be possible to interface with the rs-net module so that we can also use the frames in an application running on the device itself?

@MartyG-RealSense
Collaborator

The Further Research section at the end of the open source ethernet networking paper, which offers suggestions for how the project could be expanded upon, speculates that "computational resources of the Raspberry Pi can be utilized to perform additional post-processing tasks".

@MartyG-RealSense
Collaborator

I also recalled a case about streaming depth data over a network with gstreamer:

#6465

@MartyG-RealSense
Collaborator

Hi @kouta-kun Do you require further assistance with this case, please? Thanks!

@kouta-kun
Author

Hi @MartyG-RealSense, we managed to compile librealsense with CUDA support on our device, so we no longer see the bottleneck that made remote processing necessary. Thanks for asking!

@MartyG-RealSense
Collaborator

Great news - thanks so much for the update! I will close this case now, as you found a solution. Please do create a new case in the future if you have any problems.

@anhTuan0712

Hi @kouta-kun, may I ask where you found this information? Could you point me to a reference? "Also, trying to follow the C++/C# examples for software_device does not work as the method create_matcher is not enabled in the pyrealsense wrapper."

@kouta-kun
Author

Hi @anhTuan0712, looking at the binding classes under https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python, I was not able to find any Python binding for the create_matcher function. I tried to bind it myself but could not get it working, although I may have missed another way to access it.
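A quick way to check which names a wrapper module actually exposes is to compare a list of expected names against the imported module. This is a generic helper I am sketching here, not something from librealsense:

```python
import importlib


def missing_bindings(module_name: str, names: list) -> list:
    """Return the subset of `names` that the module does not expose at module level."""
    mod = importlib.import_module(module_name)
    return [name for name in names if not hasattr(mod, name)]


# Example against the standard library: math exposes sqrt but has no create_matcher.
# For the RealSense wrapper one would call e.g.
# missing_bindings("pyrealsense2", ["software_device", "create_matcher"]).
```

Note that this only inspects module-level attributes; in the C++ API create_matcher is a method on the device class, so the analogous check there would be `dir()` on the corresponding Python class (e.g. `dir(rs.software_device)`).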
