I am fairly new to Python and computer vision, but I've managed to set up a basic script to open my CCTV cameras at home. It works great, but I have a slight issue: it loops over each camera one after the other, so instead of each camera image updating every second, it updates around every 5 seconds - the time it takes to complete a loop.
What I want is to use multiprocessing / multithreading so that when I call the function for each of my cameras, it opens in a new process or pool, running in parallel with the others, thereby updating each camera every second.
As you can see in the code below, I call each of the cameras using the same function but with different arguments. I've read up on Process and have tried a few variations, but I don't seem to be able to get it working.
I'm sure it's a simple one, and it would be great if someone could point me in the right direction.
Here's the code:
# import libraries
from threading import Thread
import imutils
import cv2, time
import argparse
import numpy as np
import datetime
from imutils.video import WebcamVideoStream
from multiprocessing import Process
stream = WebcamVideoStream('rtsp://mylink1./').start()  # open each RTSP network stream and start its reader thread
stream2 = WebcamVideoStream('rtsp://mylink2./').start()
stream3 = WebcamVideoStream('rtsp://mylink3./').start()
stream4 = WebcamVideoStream('rtsp://mylink4./').start()
stream5 = WebcamVideoStream('rtsp://mylink5./').start()
def checkimage(stream, camname):
    try:
        frame = stream.read()
        (h, w) = frame.shape[:2]
        cv2.imshow(camname, frame)
        print('[INFO]Checked ' + str(camname) + ' at ' + datetime.datetime.now().strftime("%H:%M:%S") + '...')
    except AttributeError:
        # stream.read() can return None before the first frame arrives
        pass
# infinite loop
while True:
    checkimage(stream, "Back Door Camera")
    checkimage(stream2, "Conservatory Camera")
    checkimage(stream3, "Front Door Camera")
    checkimage(stream4, "Garage Camera")
    checkimage(stream5, "Shed Camera")
    key = cv2.waitKey(1) & 0xFF
    # check for 'q' key-press to break out
    if key == ord("q"):
        break
# close output windows
cv2.destroyAllWindows()
# safely close the video stream
stream.stop()
Thanks in advance!
Chris
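Not from the original post, but a minimal sketch of the pattern being asked about, under the assumption that each stream object exposes a (possibly blocking) `read()`: keep the reads in worker threads and do all the `cv2.imshow` calls from the main loop, since GUI calls are generally only safe on the main thread. The `FrameGrabber` name is hypothetical.

```python
import threading

class FrameGrabber:
    """Poll a stream's read() in a background thread and keep only
    the latest frame, so one slow camera never delays the others."""
    def __init__(self, stream, name):
        self.stream = stream
        self.name = name
        self.frame = None
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._loop, daemon=True)

    def start(self):
        self._thread.start()
        return self

    def _loop(self):
        while not self._stop.is_set():
            # the potentially slow read happens off the main thread
            self.frame = self.stream.read()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

The main loop then becomes something like `for g in grabbers: cv2.imshow(g.name, g.frame)` followed by the usual `waitKey` check, so a slow RTSP stream no longer stalls the whole loop.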
This is what I have at the moment, but it's getting really slow read times. It takes about 6-8 seconds just to read the frame with OpenCV, and for my project I need to be able to get a picture at specific intervals as it reads a pressure transducer. Is there a way to make this program faster with cv2, or is there a way, using arrays or the like, to do this much quicker?
import cv2
import timeit

def main():  # define camera function
    start = timeit.default_timer()  # start a runtime timer
    hud_Cam = cv2.VideoCapture(0)  # open camera resources
    gauge_Cam = cv2.VideoCapture(1)
    rval, hud_Img = hud_Cam.read()  # read/grab a camera frame
    rval, gauge_Img = gauge_Cam.read()
    stop = timeit.default_timer()  # stop the runtime timer
    print('Time: ', stop - start)  # report elapsed time
    start1 = timeit.default_timer()  # start a second runtime timer
    hud_name = 'HudPicture0.jpg'  # file names
    gauge_name = 'GaugePicture0.jpg'
    cv2.imwrite(hud_name, hud_Img)  # write camera frames to the files
    cv2.imwrite(gauge_name, gauge_Img)
    print("Hud Picture written!")  # log to console
    print("Gauge Picture written!")
    hud_Cam.release()  # release camera resources to clear memory
    gauge_Cam.release()
    stop1 = timeit.default_timer()  # stop the second runtime timer
    print('Time: ', stop1 - start1)

main()
As I understand your question, you want to be able to take two images from two cameras as soon as possible after LabVIEW requests them.
On Linux or macOS, I would start capturing continuously as soon as possible and then signal the capturing process using Unix signals. Unfortunately, you are using Windows, and signals don't work so well there. So I am going to use the filesystem to signal: if LabVIEW wants a picture taken, it just creates a file (with or without content) called capture.txt, and that makes the Python process save the current image. There are other, more sophisticated methods, but this demonstrates the concept, and as you learn more you may replace the signalling mechanism with a write on a socket, an MQTT message, or something else.
I put the two cameras in two independent threads so they can work in parallel, i.e. faster.
#!/usr/bin/env python3

import cv2
import threading
import logging
from pathlib import Path

def capture(stream, path):
    """Capture given stream and save to file on demand"""
    # Start capturing to RAM
    logging.info(f'[captureThread]: starting stream {stream}')
    cam = cv2.VideoCapture(stream, cv2.CAP_DSHOW)
    while True:
        # Read video continuously
        _, im = cam.read()
        # Check if LabVIEW wants it
        if CaptureFile.exists():
            # Intermediate filename
            temp = Path(f'saving-{stream}.jpg')
            # Save image with temporary name
            cv2.imwrite(str(temp), im)
            # Rename so LabVIEW knows it is complete and not still being written
            temp.rename(path)
            logging.info('[captureThread]: image saved')
            break
    logging.info('[captureThread]: done')

if __name__ == "__main__":
    # Set up logging - advisable when threading
    format = "%(asctime)s: %(message)s"
    logging.basicConfig(format=format, level=logging.INFO, datefmt="%H:%M:%S")
    logging.info("[MAIN]: starting")

    # LabVIEW will create this file when a capture is required; ensure it is not there already
    CaptureFile = Path('capture.txt')
    CaptureFile.unlink(missing_ok=True)

    # Create a thread for each camera and start them
    HUDthread = threading.Thread(target=capture, args=(0, Path('HUD.jpg')))
    Gaugethread = threading.Thread(target=capture, args=(1, Path('Gauge.jpg')))
    HUDthread.start()
    Gaugethread.start()

    # Wait for both to exit
    HUDthread.join()
    Gaugethread.join()
    logging.info("[MAIN]: done")
This got the time down to Time: 2.066412200000059.
import timeit
start = timeit.default_timer()

import cv2

hud_Cam = cv2.VideoCapture(0, cv2.CAP_DSHOW)  # open camera resources
gauge_Cam = cv2.VideoCapture(1, cv2.CAP_DSHOW)

def main(hud_Cam, gauge_Cam, start):  # define camera function
    while True:
        rval, hud_Img = hud_Cam.read()  # read/grab camera frames
        rval, gauge_Img = gauge_Cam.read()
        hud_name = 'HudPicture0.jpg'  # file names
        gauge_name = 'GaugePicture0.jpg'
        cv2.imwrite(hud_name, hud_Img)  # write camera frames to the files
        cv2.imwrite(gauge_name, gauge_Img)
        print("Hud Picture written!")  # log to console
        print("Gauge Picture written!")
        hud_Cam.release()  # release camera resources to clear memory
        gauge_Cam.release()
        stop = timeit.default_timer()
        print('Time: ', stop - start)  # report elapsed time
        break
    # =========================================================================
    while True:
        img1 = cv2.imread(hud_name)
        img2 = cv2.imread(gauge_name)
        cv2.imshow("Hud Image", img1)
        cv2.imshow("Gauge Image", img2)
        k = cv2.waitKey(60)
        if k % 256 == 27:  # Esc closes the windows
            cv2.destroyAllWindows()
            break

main(hud_Cam, gauge_Cam, start)  # call camera function
Is it possible, instead of copying the stream content to a given location (stream.copy_to('motion.h264')), to save it in a variable to be further processed? The reason I would like to do so is to be able to downgrade the video resolution once the video buffer has been collected.
Thanks in advance!
import io
import random
import picamera

def motion_detected():
    # Randomly return True (like a fake motion detection routine)
    return random.randint(0, 10) == 0

camera = picamera.PiCamera()
stream = picamera.PiCameraCircularIO(camera, seconds=20)
camera.start_recording(stream, format='h264')
try:
    while True:
        camera.wait_recording(1)
        if motion_detected():
            # Keep recording for 10 seconds and only then write the
            # stream to disk
            camera.wait_recording(10)
            stream.copy_to('motion.h264')
finally:
    camera.stop_recording()
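A sketch of the idea being asked about, assuming the standard picamera API: PiCameraCircularIO.copy_to() accepts any writeable file-like object as well as a filename, so the buffer can be copied into an in-memory io.BytesIO and post-processed (e.g. downscaled by an external encoder) before anything touches the disk. The FakeCircularIO class below is a hypothetical stand-in so the pattern can run without camera hardware:

```python
import io

class FakeCircularIO:
    """Hypothetical stand-in for picamera.PiCameraCircularIO,
    so the pattern can run without Raspberry Pi hardware."""
    def __init__(self, data):
        self._data = data

    def copy_to(self, output):
        # picamera's real copy_to() also accepts any writeable
        # file-like object, not just a filename string
        if hasattr(output, 'write'):
            output.write(self._data)
        else:
            with open(output, 'wb') as f:
                f.write(self._data)

stream = FakeCircularIO(b'\x00fake-h264-frames')
buffer = io.BytesIO()
stream.copy_to(buffer)          # instead of stream.copy_to('motion.h264')
h264_bytes = buffer.getvalue()  # raw H.264 bytes, ready for re-encoding
```

With the real camera, the same two lines (`buffer = io.BytesIO(); stream.copy_to(buffer)`) would replace the `stream.copy_to('motion.h264')` call in the snippet above.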
I want to run a multiple-object-detection (YOLOv3) algorithm with different parameters.
I need to use more CPU cores, because the detecting and counting algorithms are too heavy to control otherwise.
So I made an object class which can be set with different video names and GPU numbers, but I am new to multiprocessing in Python and can't handle a multiprocessing class well.
Features
1. Draw baselines in PySide2 or PyQt.
2. Select a GPU which will run the YOLOv3.
3. Count cars which crossed a specific line in a road.
But first I want to test whether multiprocessing can run the object class before really implementing it.
# main.py
from stream import video
import multiprocessing as mp

print(mp.current_process())
process1 = mp.Process(target=video, args=('DJI_0474_2.MOV', 0))
process2 = mp.Process(target=video, args=('DJI_0474_3.MOV', 1))
process1.daemon = True
process2.daemon = True
process1.start()
process2.start()
# stream.py
import cv2
from threading import Timer
import multiprocessing as mp
# from yolov3 import YOLO

class video(object):
    def __init__(self, name, gpu_num):
        print(mp.current_process())
        self.cap = cv2.VideoCapture(name)
        # self.detector = YOLO(gpu_num)
        self.timer = Timer(0.066, self.run)
        self.timer.start()

    def run(self):
        ret, frame = self.cap.read()
        if ret:
            # frame = self.detector(frame)
            cv2.imshow('frame', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                return  # 'break' here was a syntax error: run() has no loop
I made simple code to test multiprocessing.
But when I initialize the video class with the two args ('DJI_0474_2.MOV', 0), main.py closes itself.
I tried initializing before running multiprocessing, but the error says "can't pickle cv2.VideoCapture objects".
# main2.py
from stream2 import video
import multiprocessing as mp

print(mp.current_process())
play = video('DJI_0474_2.MOV')
# process = mp.Process(target=video, args=('DJI_0474_2.MOV',))
process = mp.Process(target=play.run)
process.daemon = True
process.start()
# stream2.py
import cv2
from threading import Timer
import multiprocessing as mp
# from yolov3 import YOLO

class video(object):
    def __init__(self, name):
        print(mp.current_process())
        self.cap = cv2.VideoCapture(name)

    def run(self):
        self.timer = Timer(0.066, self.update)
        self.timer.start()

    def update(self):
        ret, frame = self.cap.read()
        if ret:
            cv2.imshow('frame', frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                return  # 'break' here was a syntax error: update() has no loop
I think that's because the video class lives in the main process but run executes in another process, so they can't communicate with each other.
I will try a queue to communicate between processes, but I am worried about the latency... so the best option would be launching multiprocessing inside the object class itself.
Is there any solution that lets me run multiprocessing inside an object class?
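Not a full answer, but a runnable sketch of the usual fix for the pickling error above: keep only plain data (the video name, the GPU number) on the object, and create unpicklable resources such as cv2.VideoCapture inside the child process, i.e. in run() rather than __init__(). The Worker name and the placeholder result are illustrative, not from the original code.

```python
import multiprocessing as mp

class Worker:
    """Only plain data crosses the pickle boundary; the heavy resource
    (cv2.VideoCapture in the real code) is created inside the child."""
    def __init__(self, name, gpu_num):
        self.name = name        # picklable
        self.gpu_num = gpu_num  # picklable

    def run(self, results):
        # In the real code: self.cap = cv2.VideoCapture(self.name)
        cap = 'capture(%s, gpu=%d)' % (self.name, self.gpu_num)
        results.put(cap)

def launch():
    # prefer the fork start method where available, so this sketch
    # also behaves when pasted into a larger unguarded script
    method = 'fork' if 'fork' in mp.get_all_start_methods() else None
    ctx = mp.get_context(method)
    results = ctx.Queue()
    w = Worker('DJI_0474_2.MOV', 0)
    p = ctx.Process(target=w.run, args=(results,))
    p.start()
    out = results.get()  # get() before join() avoids the classic queue/join deadlock
    p.join()
    return out

if __name__ == '__main__':
    print(launch())
```

Because __init__ now stores only strings and integers, pickling the bound method play.run for mp.Process no longer fails.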
There is no easy way. What I can think of is ROS, Boost, multithreading, and all kinds of mutexes; it is going to take a while.
The most crude way is to build several YOLO binaries, say one GPU build and various CPU builds,
calling each one from a different terminal to start a new process.
Then split the trained model into different subsets by retraining N models,
with each build calling one sub-model.
Then write all findings to N txt files and concatenate them into one txt file, to avoid mutex-lock issues.
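The last step described above (each worker writing its own file so no lock is needed, then merging afterwards) could be sketched like this; the file names are hypothetical:

```python
from pathlib import Path

# each detector process writes its own findings file, so no mutex is needed
parts = [Path('findings_%d.txt' % i) for i in range(3)]
for i, p in enumerate(parts):
    p.write_text('worker %d: counted %d cars\n' % (i, 10 + i))

# merge afterwards, once all workers have finished
merged = ''.join(p.read_text() for p in parts)
Path('findings_all.txt').write_text(merged)
print(merged)
```

Since every process only ever touches its own file, the workers never contend for a shared resource, and the merge is a single sequential step at the end.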
I have a Python 3.7 and OpenCV 3 program running on Windows 10 and a Raspberry Pi. The program has three threads: the first is the main loop, the second is a camera, and the third writes videos and images.
I searched many references and put together what I think is a way to catch the correct "stop" keyboard signals. I tested on Win10 and RPi using the threading package and it appears to work. I am wondering if there is a more appropriate way to handle keyboard input? I had to put the keyboard handler in main, as I wasn't able to catch it in the while loop. The following example doesn't show threading, but it also worked in my larger program.
import signal
import cv2

class main():
    def __init__(self):
        self.stop_by_signal = False

    def signal_term_handler(self, signal, frame):
        self.stop_by_signal = True

    def run(self):
        signal.signal(signal.SIGTERM, self.signal_term_handler)
        self.cap = cv2.VideoCapture(0)
        print("starting")
        while True:
            # read cameras and display
            ret, frame = self.cap.read()
            if not ret:
                break
            cv2.imshow('frame', frame)
            # look for kill, ctrl-c or cv2.waitKey to stop
            if self.stop_by_signal:
                print("main: stopping loop via kill")
                break
            key = cv2.waitKey(1) & 0xFF
            if key == ord('q') or key == 27:
                print("main: cv2 esc or q entered to stop")
                break
            elif key != 255:
                print('key: %s' % [chr(key)])
        self.cap.release()
        cv2.destroyAllWindows()

if __name__ == '__main__':
    try:
        main = main()
        main.run()
    except KeyboardInterrupt:
        main.cap.release()
        print("__main:__ ctl-c entered to stop.")
Looks like it is working well. For the kill command, it works without the -9 option.
I'm working on a photo booth running on a Raspberry Pi using Python and OpenCV. I found a great code example of how to capture images from a webcam here: http://joseph-soares.com/programacao/python/python-opencv-salvar-imagem-da-camera-no-disco/
The code as provided works perfectly on both the Pi and my Windows PC. When I start adding code to it, I run into issues: I can no longer see what the webcam is seeing on the Pi, and on Windows it is hit or miss. Both are still capturing pictures, though. On Windows it will actually display the image taken in the frame. If I revert back to the original code, it works just fine again.
I'm basically using a loop to give a countdown before the picture is taken, flashing a light on the Arduino to represent the digit output that will be added in. I originally thought it was a memory issue on the Pi, but its failing on my desktop too makes me think otherwise. Any help would be appreciated.
import time, cv
from datetime import datetime
import pyfirmata
import serial

#board = pyfirmata.Arduino('/dev/ttyACM0')
board = pyfirmata.Arduino('COM7')
#arduino.open()

# start an iterator thread so
# serial buffer doesn't overflow
iter8 = pyfirmata.util.Iterator(board)
iter8.start()

greenLED = board.get_pin('d:13:o')
debug = 1

def captureImage():
    snapped = cv.QueryFrame(capture)
    cv.SaveImage(datetime.now().strftime('%Y%m%d_%Hh%Mm%Ss%f') + '.jpg', snapped)

if __name__ == '__main__':
    capture = cv.CaptureFromCAM(0)
    cv.NamedWindow('Webcam')
    try:
        while True:
            for n in range(0, 4):
                frame = cv.QueryFrame(capture)
                cv.ShowImage('Webcam', frame)
                cv.WaitKey(10)
                if debug: print "count down"
                for i in range(0, 5):
                    print i
                    greenLED.write(1)
                    time.sleep(1)
                    greenLED.write(0)
                    time.sleep(0.2)
                greenLED.write(1)
                time.sleep(0.2)
                print "Say Cheese"
                captureImage()
                greenLED.write(0)
                if debug: print "Waiting for 5 seconds"
                time.sleep(5)
            break
        capture = None
        cv.DestroyAllWindows()
        board.exit()
    except KeyboardInterrupt:
        cv.DestroyAllWindows()
        board.exit()