
Multiple overlaid timestamps on an image being sent by detect_motion #15

Closed
jeffbass opened this issue Aug 17, 2020 · 8 comments

@jeffbass
Owner

Hi Stephen (@sbkirby ),
Thanks again for your debugging efforts on images captured by detect_motion. I used your "draw_time" yaml settings on one of my RPi cams that is sending at 640x480 resolution with a framerate setting of 16. It sent an image with two timestamps on the same image (and it also sent images with a single timestamp). I now have a reproducible example showing that your use of the high-resolution camera is not the problem. I'll try to debug this relatively soon and keep you posted. If you come up with anything, let me know.
Here's the image from my Driveway-Mailbox RPi with 2 timestamps on the same image.
Jeff

Driveway-Mailbox-2020-08-16T19 11 42 337177

@jeffbass
Owner Author

Hi Stephen (@sbkirby),
The detect_motion method debugging is likely to take a few weeks. I will probably refactor the method quite a bit and try multiple alternatives. It will also need additional yaml parameters. I have created a branch here on GitHub named fix_motion in case you (or anyone else) wants to pull a branch to experiment with fixes to detect_motion. Please use that branch for any detect_motion fixes you might want to try / share; I'll likely push other branches with some of my own detect_motion experiments. I'm also going to be making a few other tweaks to other parts of imagenode; I'll push those to the master branch.
Thanks again for finding this (subtle and annoying) bug,
Jeff

@sbkirby
Contributor

sbkirby commented Aug 18, 2020

Hey Jeff @jeffbass ,
I've written several test programs using bits and pieces of the VideoStream (PiVideoStream) class of imutils to determine how fast this code could supply a frame from a stream. The results were disappointing. The VideoStream thread can supply frames as fast as the program asks for them, but, as we discovered, many of those frames are duplicates of one another. Using the following code from the PiVideoStream class (which VideoStream wraps) I was able to measure the actual speed of acquiring a frame:


import cv2
from picamera.array import PiRGBArray
from picamera import PiCamera
import time
from datetime import datetime

nightpath = './'
resolution = (1920,1456)
framerate = 30
cam = PiCamera()
cam.resolution = resolution
cam.framerate = framerate
rawCapture = PiRGBArray(cam, size=resolution)
stream = cam.capture_continuous(rawCapture,
			format="bgr", use_video_port=True)
i=0
time.sleep(3.0) # allow camera sensor to warm up

start = time.time()
# start loop
for f in stream:
	i += 1
	frame = f.array
	rawCapture.truncate(0)
	image_size = frame.shape  # actual image_size from this camera
	width, height = image_size[1], image_size[0]
	res_actual = (width, height)
	filename = nightpath + str(time.time())+'.jpg'
	display_time = datetime.now().isoformat(sep=' ', timespec='microseconds')
	cv2.putText(frame, display_time, (30,50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,255,255), 1, cv2.LINE_AA)
	if i == 200:  # stop after 200 frames
		break

totaltime = time.time() - start
print('total time: ' + str(totaltime))
stream.close()
rawCapture.close()
cam.close()

For the above resolution, it took about 51.85 seconds to fetch 200 frames, or 3.85 fps. A resolution of (640, 480) took 6.75 seconds, or 29.6 fps, running on an RPi 4B with the HQ camera module.

Next, I switched to the VideoStream thread and ran a similar test. The code below completed in 0.05 seconds for 200 frames. I added the SaveQueue class to the script to save the images, but overlaid timestamps appeared on nearly every image, making them useless. Finally, I added a time.sleep() "governor" after cam.read(), and I was then able to fetch frames/images without overlaid timestamps at the same frame rates measured in the first test.


import cv2
from imutils.video import VideoStream
import time
from datetime import datetime

nightpath = './'
resolution = (1920,1456)
framerate = 30
cam = VideoStream(usePiCamera=True,
	resolution=resolution,
	framerate=framerate).start()
time.sleep(3.0) # allow camera sensor to warm up

start = time.time()
# start loop
for i in range(200):
	frame = cam.read()
	#time.sleep(0.3)  # the "governor" mentioned above
	image_size = frame.shape  # actual image_size from this camera
	width, height = image_size[1], image_size[0]
	res_actual = (width, height)
	#print(str(res_actual))
	filename = nightpath + str(time.time())+'.jpg'
	display_time = datetime.now().isoformat(sep=' ', timespec='microseconds')
	cv2.putText(frame, display_time, (30,50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0,255,255), 1, cv2.LINE_AA)

totaltime = time.time() - start
print('total time: ' + str(totaltime))
cam.stop()

Note: I used the SaveQueue class below to save the images.

from collections import deque
import threading
import time

import cv2

class SaveQueue:
    def __init__(self, maxlen=None):
        self.save_q = deque(maxlen=maxlen)
        self.keep_saving = True

    def __bool__(self):
        return self.keep_saving

    def __len__(self):
        return len(self.save_q)

    def append(self, text_and_image):
        self.save_q.append(text_and_image)

    def save_images_forever(self):
        # this will run in a separate thread
        while self.keep_saving:
            if len(self.save_q) > 0:  # save until save_q is empty
                text, image = self.save_q.popleft()
                cv2.imwrite(text, image)
            else:
                time.sleep(0.001)  # brief sleep before checking save_q again

    def start(self):
        # start the thread that saves images from the save_q
        t = threading.Thread(target=self.save_images_forever)
        print('Starting threading')
        t.daemon = True
        t.start()

    def stop(self):
        self.keep_saving = False
        print('Stopping threading')

It appears as though this is a "feature" of frames acquired from a video stream in a thread: the same frame remains available until the thread can provide another.
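This stale-read behavior can be reproduced without any camera hardware. The sketch below is illustrative, not imagenode or imutils code: a hypothetical MockThreadedStream stands in for imutils.VideoStream, and an integer counter stands in for a frame. A tight read loop sees the same "frame" many times before the capture thread supplies a new one:

```python
import threading
import time

class MockThreadedStream:
    """Minimal stand-in for imutils.VideoStream: a background thread
    produces a new 'frame' (here just a counter) at a fixed rate, and
    read() always returns the latest one -- even if it is unchanged."""
    def __init__(self, frame_interval=0.05):
        self.frame_interval = frame_interval
        self.frame = 0
        self.running = True
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):
        while self.running:
            time.sleep(self.frame_interval)  # simulated sensor capture time
            self.frame += 1  # "capture" the next frame

    def read(self):
        return self.frame  # returns the same frame until a new one arrives

    def stop(self):
        self.running = False

stream = MockThreadedStream(frame_interval=0.05)  # ~20 "fps"
time.sleep(0.2)  # let a few frames accumulate
frames = [stream.read() for _ in range(200)]  # read as fast as possible
stream.stop()
duplicates = len(frames) - len(set(frames))
print('duplicate reads:', duplicates)  # far more duplicates than unique frames
```

The 200 reads finish in microseconds while the "camera" only produces a frame every 50 ms, so nearly every read is a duplicate of the previous one, which is exactly the pattern that produces multiple timestamp overlays on one image.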

@jeffbass
Owner Author

jeffbass commented Aug 18, 2020

Hi Stephen @sbkirby,
Thanks for doing this. Great test. I think this may be the primary cause of the timestamp overlay problem. At first glance, I don't see an easy way to make sure VideoStream.cam only sends new frames rather than just providing the last available one again. Maybe use a threading.Event that sets / resets a new_frame_event that blocks a camera read until a new frame has actually been acquired. But then, you are essentially back to an unthreaded camera read. I did some quick tests (though not as careful as yours) when I first wrote detect_motion a couple of years ago. I was getting overlaid images then, too, but I didn't pursue it.
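The threading.Event idea could look something like the following sketch (EventGatedStream and its names are illustrative, not imagenode code; an integer counter again stands in for a frame). Each read() blocks until the capture thread signals a genuinely new frame, which eliminates duplicates at the cost of serializing reads to the frame rate:

```python
import threading
import time

class EventGatedStream:
    """Sketch of the threading.Event idea: read() blocks until the
    capture thread has actually produced a new frame, so every read
    returns a distinct frame (names are illustrative, not imagenode's)."""
    def __init__(self, frame_interval=0.05):
        self.frame_interval = frame_interval
        self.frame = 0
        self.new_frame_event = threading.Event()
        self.running = True
        threading.Thread(target=self._update, daemon=True).start()

    def _update(self):
        while self.running:
            time.sleep(self.frame_interval)  # simulated sensor capture time
            self.frame += 1
            self.new_frame_event.set()  # signal that a fresh frame exists

    def read(self):
        self.new_frame_event.wait()   # block until a new frame is captured
        self.new_frame_event.clear()  # the next read must wait again
        return self.frame

    def stop(self):
        self.running = False

stream = EventGatedStream(frame_interval=0.01)
frames = [stream.read() for _ in range(20)]
stream.stop()
print('unique frames:', len(set(frames)))  # every read returned a new frame
```

Note the trade-off Jeff mentions: each read() now takes roughly one frame interval, so the loop runs at the camera's frame rate, just like an unthreaded read.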

I'm going to use an unthreaded PiCamera read (as a substitute for imutils.VideoStream) for my subsequent detect_motion testing. I think this would be a relatively simple yaml option to add to the imagenode Camera class; I'll put that into the fix_motion branch in the next day or two. I would use an unthreaded PiCameraUnthreadedStream class instead of VideoStream at line 685 of imaging.py. The PiCameraUnthreadedStream class would be an unthreaded PiCamera read like the one in your first code snippet above.

Your test is also making me think about the detect_motion algorithm again.

@jeffbass
Owner Author

Hi Stephen @sbkirby,
I added a PiCameraUnthreadedStream class to the fix_motion branch. It reads the PiCamera without using the imutils.VideoStream class or using threading. I re-ran my tests from yesterday and the overlaid timestamps did not appear. The use of the unthreaded PiCameraUnthreadedStream is selected with the new yaml option threaded_read: False in the P1: cameras section:

cameras:
  P1:
    viewname: Mailbox
    resolution: (640, 480)
    framerate: 16
    threaded_read: False  # this is the new option; False selects PiCameraUnthreadedStream
    vflip: False
    exposure_mode: auto # night
    detectors:
      motion:
        ROI: (17,45),(85,61)
        draw_roi: ((255,0,0),1)
        send_frames: detected event # continuous 
        send_count: 2
        delta_threshold: 3
        min_motion_frames: 2
        min_still_frames: 2
        min_area: 3  # minimum area of motion as percent of ROI
        blur_kernel_size: 15  # Gaussian Blur kernel size
        send_test_images: False
        draw_time: ((0,255,255),1)
        draw_time_org: (5,5)

Perhaps you could try out this branch and use that yaml option and tell me if it eliminates the overlaid timestamps for you as well.
Thanks,
Jeff

@sbkirby
Contributor

sbkirby commented Aug 19, 2020

I'll try it out, and let you know.

@sbkirby
Contributor

sbkirby commented Aug 21, 2020

It works fine, great job. Unfortunately, it reveals just how slow these RPi camera modules really are. Before working with these cameras, I always thought that ALL image processing software would be much slower than the hardware. Today, I'm not so sure. I'm going to play around with the exposure and gain controls to see if I can increase the fps. Thanks for all your hard work.
Thanks,
Stephen

@jeffbass
Owner Author

@sbkirby, Regarding image processing speed: I have found that doing any image transformation on the RPi slows down the image pipeline. I noticed you have a resize_width: 100 option in the bird_cam_rpi.yaml you contributed. I would recommend taking that out; I have found that any resizing on the RPi causes a significant slowdown (and neither imutils nor OpenCV checks "how much" the image is being resized, so taking it out entirely is the fastest option). I will update the yaml docs to warn about this. "Resizing" the RPi image by setting resolution=(x,y) is much faster because it uses the RPi GPU rather than OpenCV's pixel interpolation, which does not. I have moved all my image transformations off of the RPi and onto the downstream computers in the pipeline so they don't slow the RPi image capture down.
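As a sketch of that recommendation (the resolution and resize_width option names appear elsewhere in this thread; the viewname value here is just a placeholder, not the actual contents of bird_cam_rpi.yaml), the change would look something like:

```yaml
cameras:
  P1:
    viewname: birds
    resolution: (640, 480)   # scaling done by the RPi GPU -- fast
    # resize_width: 100      # removed: resizing via imutils/OpenCV runs on the CPU -- slow
```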

@jeffbass
Owner Author

I have merged the threaded_read yaml option discussed above into the master branch. Since setting it to False eliminates the multiple overlaid timestamps, I am closing this issue.

The threaded_read option defaults to True, which uses the imutils VideoStream threaded camera reading. Per the above discussion, that also means duplicate images are possible.
