OpenCV & Python - Real-time image (frame) processing

We're doing a project in school where we need to do basic image processing. Our goal is to take every video frame from the Raspberry Pi camera and do real-time image processing on it.
We've tried to include raspistill in our Python program, but so far nothing has worked. The goal of our project is to design an RC car which follows a blue/red/whatever coloured line with the help of image processing.
We thought it would be a good idea to make a Python program which does all the image processing necessary, but we currently struggle with getting the recorded images into the Python program. Is there a way to do this with picamera, or should we try a different way?
For anyone curious, this is how our program currently looks:
import cv2
#import picamera

while True:
    #camera = picamera.PiCamera()
    #camera.capture('image1.jpg')
    img = cv2.imread('image1.jpg')
    width = img.shape[1]
    height = img.shape[0]
    height = height - 1
    for x in range(0, width):
        if x >= 0 and x < (width // 2):
            blue = img.item(height, x, 0)
            green = img.item(height, x, 1)
            red = img.item(height, x, 2)
            if red > green and red > blue:
                pass  # rest of the detection logic not shown in the question

OpenCV already contains functions to process live camera data.
This OpenCV documentation provides a simple example:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Of course, you do not want to show the image; instead, all of your processing can be done at that point in the loop.
Remember to sleep a few hundred milliseconds between frames so the Pi does not heat up as much.
Edit:
"how exactly would I go about it though. I used "img = cv2.imread('image1.jpg')" all the time. What do I need to use instead to get the "img" variable right here? What do I use? And what is ret, for? :)"
ret indicates whether the read was successful; exit the program if it is not.
The read frame is nothing other than your img = cv2.imread('image1.jpg'), so your detection code should work exactly the same.
The only difference is that your image does not need to be saved and reopened. Also, for debugging purposes you can save the recorded image, like this:
import cv2, time

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    # imwrite needs a file extension to choose the encoder
    cv2.imwrite(time.strftime("%Y%m%d-%H%M%S") + ".jpg", frame)
cap.release()
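For completeness, here is a minimal sketch of how the bottom-row colour scan from the question could sit directly inside the capture loop, without saving any files (this assumes your per-pixel logic stays as it is in the question):

import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    height = frame.shape[0] - 1
    width = frame.shape[1]
    # Scan the left half of the bottom row, just like the original code did on the saved image
    for x in range(0, width // 2):
        blue = frame.item(height, x, 0)
        green = frame.item(height, x, 1)
        red = frame.item(height, x, 2)
        if red > green and red > blue:
            pass  # steering logic from the original program would go here
cap.release()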

You can use picamera to acquire images.
To make it "real time", you can acquire data every X milliseconds. You need to set X depending on the power of your hardware (and the complexity of the OpenCV algorithm).
Here's an example (from http://picamera.readthedocs.io/en/release-1.10/api_camera.html#picamera.camera.PiCamera.capture_continuous) of how to capture 60 consecutive images with picamera, one per second:
import time
import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    try:
        for i, filename in enumerate(camera.capture_continuous('image{counter:02d}.jpg')):
            print(filename)
            time.sleep(1)
            if i == 59:
                break
    finally:
        camera.stop_preview()
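If you want to feed the frames straight into OpenCV without writing JPEG files first, picamera can also capture directly into a numpy array. A minimal sketch (assuming the picamera package and its array module are installed on the Pi) could look like this:

import time
import cv2
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 30
    raw = picamera.array.PiRGBArray(camera, size=(640, 480))
    time.sleep(2)  # give the sensor a moment to warm up

    # format="bgr" yields frames in the same layout cv2.imread() would give you
    for capture in camera.capture_continuous(raw, format="bgr", use_video_port=True):
        img = capture.array
        # ... run your colour/line detection on img here ...
        raw.truncate(0)  # reset the buffer before the next frame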

Related

I am trying to capture still images from RTSP feed in Python; however the image keeps 'hanging/freezing'

So I've been scouring GitHub looking for answers but haven't yet found the solution, so I will be grateful for any help!
I am trying to make a DIY trail camera. I have an IP camera providing an RTSP feed, and I want to capture this feed and take photos based on a PIR motion sensor (HC-SR50).
I am running this off a Raspberry Pi remotely. However, the image is stuck on the first frame: it saves the first image from the RTSP feed, and then saves and outputs that same image over and over, whilst imshow() shows the live feed fine (this is commented out below as it was interrupting the code).
I figured out that when I do imshow() it was also stuck, and managed to resolve this by searching this site (see code).
I am using the TAPO cameras.
The issue seems to be in the while loop where pir.wait_for_motion() begins.
Thanks in advance for any help!
from gpiozero import MotionSensor
import cv2
from datetime import datetime
import time
import getpass
# SO THIS PART WORKS OK
# rtsp in
rtsp_url = 'rtsp://user:pass#IP/stream2'
#vlc-in
#output
writepath = "OUTPUTPATH"
pir = MotionSensor(4)
cap = cv2.VideoCapture(rtsp_url)
frameRate = cap.get(5)
The part below was just to show that the RTSP feed was working; it's all OK, so it is commented out for now as it blocked the rest of the code from running. This part isn't strictly necessary for now.
while cap.isOpened():
    flags, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.startWindowThread()
    cv2.imshow('RGB OUTPUT', gray)
    key = cv2.waitKey(1)
cv2.destroyAllWindows()
The below part is where the problem seems to be, however I can't figure out how to keep the frames moving from RTSP feed.
# Image Capture
while cap.isOpened():
    pir.wait_for_motion()
    print("Motion")
    ret, frame = cap.read()
    if (ret != True):
        break
    cc1 = datetime.now()
    c2 = cc1.strftime("%Y")
    c3 = cc1.strftime("%M")
    c4 = cc1.strftime("%D")
    c5 = cc1.strftime("%H%M%S")
    hello = "image" + c2 + c3 + c5
    hellojoin = "".join(hello.split())
    # photo write
    cv2.imwrite(f'{writepath}/{hellojoin}.png', frame)
    print("image saved!")
    pir.wait_for_no_motion()

cap.release()
cv2.destroyAllWindows()
I wanted the PIR motion sensor to capture images from the RTSP feed based on activity in front of the sensor, basically acting as a trail camera/camera trap would.
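A common cause of this symptom is that the RTSP stream keeps buffering frames while the program is blocked in pir.wait_for_motion(), so the next cap.read() returns a stale frame. One frequently used workaround is a small background thread that keeps reading the stream so only the latest frame is retained; a rough sketch is below (the LatestFrameReader helper is a hypothetical name introduced here, and the PIR wiring is assumed to be as in the question):

import threading
import cv2

class LatestFrameReader:
    """Keeps reading from the capture so the newest frame is always available."""
    def __init__(self, src):
        self.cap = cv2.VideoCapture(src)
        self.frame = None
        self.ok = False
        self.running = True
        self.thread = threading.Thread(target=self._update, daemon=True)
        self.thread.start()

    def _update(self):
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                self.ok, self.frame = True, frame

    def read(self):
        return self.ok, self.frame

    def stop(self):
        self.running = False
        self.thread.join()
        self.cap.release()

With such a helper, the motion loop would call reader = LatestFrameReader(rtsp_url) once, then after pir.wait_for_motion() do ret, frame = reader.read() to get the most recent frame rather than a buffered one.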

OpenCV's VideoCapture malfunctioning when fed from IP Camera

I'm simply trying to read IP Camera live stream through OpenCV's simple code, i.e as follows:
import numpy as np
import cv2

src = 'rtsp://id:pass#xx.xx.xx.xx'
cap = cv2.VideoCapture(src)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
The problem here is that sometimes it works like a charm, showing the running live video, but at other times it creates a lot of blank windows which keep popping up until the job is killed, like the image below:
Why does this happen, and how can we avoid it?
Maybe you should cover the case that the video capture fails to establish a healthy stream.
Note that it is possible not to receive a frame in some cases even though the video capture opens. This can happen for various reasons, such as congested network traffic, insufficient computational resources, or the power-saving mode of some IP cameras.
Therefore, I would suggest you check the frame size and make sure that your VideoCapture object is receiving frames at the right shape. (You can debug and check the size of a visible frame to learn the expected resolution of the camera.)
A change in your loop like the following might help:
min_expected_frame_size = [some integer]

while(cap.isOpened()):
    ret, frame = cap.read()
    width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
    if ret == True and ((width * height) >= min_expected_frame_size):
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
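If the stream sometimes fails to open at all, a variation of the same idea is to retry the connection instead of breaking out immediately. A rough sketch under that assumption (the retry count and delay are arbitrary values chosen for illustration):

import time
import cv2

src = 'rtsp://id:pass#xx.xx.xx.xx'

def open_stream(src, retries=5, delay=2.0):
    """Try to open the capture a few times before giving up."""
    for _ in range(retries):
        cap = cv2.VideoCapture(src)
        if cap.isOpened():
            return cap
        cap.release()
        time.sleep(delay)
    return None

cap = open_stream(src)
if cap is None:
    raise RuntimeError("could not establish a healthy stream")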

Python OpenCV reads video in low quality

I'm trying to write a program that cuts my gameplay clips for me. In order to achieve what I want I need to detect a red "X" (indicates a kill) in the middle of the frame, which is a 12px * 12px region, so I need full quality. While I was debugging I realized that the frames read by OpenCV seemed lower quality than what I see in a video player.
Here is an example:
This is the "X" I've cut from a video player
This is the "X" that OpenCV read in
Both are from the same frame.
Here is the code I was debugging with:
import cv2
import numpy as np

vid = cv2.VideoCapture(path)
success, frame = vid.read()

while success:
    cv2.imshow("Frame", frame)

    # Pause video
    if cv2.waitKey(1) & 0xFF == ord('s'):
        cv2.waitKey(0)

    # Quit video
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

    success, frame = vid.read()

cv2.destroyAllWindows()
I've also checked the width and the height and they seem to be the same as the original video. I'm using OpenCV version 4.2.0. What could cause this issue?
Any help is appreciated.
If you are trying to read the frame from a file and want to change its resolution, you'd probably want to use the resize method as described here. This would need to be done inside the loop, right after you read in the frame. In C++ it looks like:
resize(src, dst, Size(width, height), 0, 0, INTER_CUBIC);
Example in Python:
b = cv2.resize(frame, (1280, 720), fx=0, fy=0, interpolation=cv2.INTER_CUBIC)
I hope this helps.
I figured it out. The only thing I had to do was convert the picture to JPEG (for some reason). I used this function to convert it in memory:
def convertToJpeg(img):
    result, encoded = cv2.imencode('.jpg', img, [cv2.IMWRITE_JPEG_QUALITY, 100])
    return cv2.imdecode(encoded, 1)
Thanks for all the help!

Python import restarts other imports

I want to create a head tracking interface for a flight simulator. I'm going to use opencv for image acquisition and processing and use vjoy to emulate the joystick (which the flight simulator can pick up & recognise).
As part of developing a toolkit of routines I have a program which highlights the brightest & darkest points in a captured frame which works well.
import numpy as np
import pyvjoy  # works if I comment this line out
import cv2

cap = cv2.VideoCapture(0)
#j = pyvjoy.VJoyDevice(1)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame - press q to exit', gray)

    # vjoy lines commented out
    # xAxis = xmin * 4000
    # j.data.wAxisX = 0x2000
    # j.set_axis(HID_USAGE_X, xAxis)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
This code works if I comment out the import of the vjoy library. However, when I import vjoy it forces a reset of the shell even though there are no other active libraries or lines. How can I prevent this?

OpenCV: Face detection taking advantage of a command line

I run this (first) example, which launches the webcam of my laptop so that I can see myself on the screen.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
I installed OpenBR on Ubuntu 14.04 LTS and successfully ran this command on a picture of myself:
br - gui -algorithm ShowFaceDetection -enrollAll -enroll /home/nakkini/Desktop/myself.png
The above command, which I run in the Terminal, displays my picture and draws a square around my face (face detection); it also highlights my eyes in green.
My Dream:
I wonder if there is a way to combine this command with the short program above so that, when the webcam is launched, I can see my face surrounded by the green rectangle?
Why do I need this ?
I found similar programs in pure OpenCV/Python for this purpose. However, for later needs I will require more than simple face detection, and I judge that OpenBR will save me a lot of headache. That is why I am looking for a way to run the command line somewhere inside the code above as a first but big step.
Hints:
The frame in the code corresponds to myself.png in the command line. The solution should pass frame in place of myself.png to the command line within the program itself.
Thank you very much in advance.
EDIT:
"After correcting the typos in #Xavier's solution I have no errors. However, the program does not run as I want it to:
First, the camera is launched and I see myself, but my face is not detected with a green rectangle. Secondly, I press a key to exit but the program does not exit: it shows me a picture of myself with my face detected. A last key press exits the program. My goal is to see my face detected while the camera is running.
You do not need OpenBR for this at all.
Just see OpenCV's Python face-detection tutorial.
Something like this should work:
import numpy as np
import cv2
import os

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        # Save the current frame and run OpenBR's face detection on it before quitting
        cv2.imwrite("/home/nakkini/Desktop/myself.png", gray)
        os.system('br - gui -algorithm ShowFaceDetection -enrollAll -enroll /home/nakkini/Desktop/myself.png')
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
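If you want the rectangle drawn live on every frame rather than only on exit, a minimal sketch using OpenCV's bundled Haar cascade could look like the following (cv2.data.haarcascades is available in recent opencv-python builds; on other installs you may need to point at the XML file's full path):

import cv2

# Load the frontal-face Haar cascade shipped with OpenCV (path may differ per install)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect faces in the grayscale frame
    faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('face detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()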
