So I am attempting to capture from two cameras in OpenCV (Python & Windows 7). I can capture from one camera just fine; you'll also notice I am doing some funky stuff to the image, but that doesn't matter. This is the code attempting to use two:
import cv
import time
cv.NamedWindow("camera", 1)
cv.NamedWindow("camera2", 1)
capture = cv.CaptureFromCAM(0)
capture2 = cv.CaptureFromCAM(1)
while True:
    img = cv.GetMat(cv.QueryFrame(capture))
    img2 = cv.GetMat(cv.QueryFrame(capture2))
    dst_image = cv.CloneMat(img)
    dst_image2 = cv.CloneMat(img2)
    cv.ConvertScale(img, dst_image, 255, -59745.0)
    cv.ConvertScale(img2, dst_image2, 255, -59745.0)
    cv.ShowImage("camera", dst_image)
    cv.ShowImage("camera2", dst_image2)
    if cv.WaitKey(10) == 27:
        cv.DestroyWindow("camera")
        cv.DestroyWindow("camera2")
        break
Rather simple. However, it won't work: upon trying to create the matrix from the second camera (the second line of code in the loop), I am told that the capture is null. The cameras I am using are Logitech and the same model.
Side note: I also couldn't find the command to count the cameras connected in Python, so if someone could refer me to that I'd much appreciate it.
--Ashley
EDIT:
It might also be useful to know that Windows often prompts me to choose which camera I would like to use; I can't seem to avoid this behavior. Additionally, I downloaded some security-like software that successfully runs both cameras at once. It is not open source or anything like that. So clearly, this is possible.
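(For the side note: as far as I know, OpenCV exposes no call that counts connected cameras. A common workaround, shown here as a sketch with the newer cv2 API rather than the cv module used above, is to probe indices until opening fails.)
import cv2
# probe camera indices until one fails to open
count = 0
while True:
    cap = cv2.VideoCapture(count)
    opened = cap.isOpened()
    cap.release()
    if not opened:
        break
    count += 1
print("cameras found:", count)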
I was having the same problem with two LifeCam Studio webcams. After a little reading, I think the problem is related to overloading the bandwidth on the USB bus. Both cameras began working if I 1) lowered the resolution (320 x 240 each) or 2) lowered the frame rate (~99 ms wait at 800 x 600). Attached is the code that I got working:
import cv
cv.NamedWindow("Camera 1")
cv.NamedWindow("Camera 2")
video1 = cv.CaptureFromCAM(0)
cv.SetCaptureProperty(video1, cv.CV_CAP_PROP_FRAME_WIDTH, 800)
cv.SetCaptureProperty(video1, cv.CV_CAP_PROP_FRAME_HEIGHT, 600)
video2 = cv.CaptureFromCAM(1)
cv.SetCaptureProperty(video2, cv.CV_CAP_PROP_FRAME_WIDTH, 800)
cv.SetCaptureProperty(video2, cv.CV_CAP_PROP_FRAME_HEIGHT, 600)
loop = True
while loop:
    frame1 = cv.QueryFrame(video1)
    frame2 = cv.QueryFrame(video2)
    cv.ShowImage("Camera 1", frame1)
    cv.ShowImage("Camera 2", frame2)
    char = cv.WaitKey(99)
    if char == 27:
        loop = False
cv.DestroyWindow("Camera 1")
cv.DestroyWindow("Camera 2")
Here is a small code sample:
import VideoCapture
cam0 = VideoCapture.Device(0)
cam1 = VideoCapture.Device(1)
im0 = cam0.getImage()
im1 = cam1.getImage()
im0 and im1 are PIL images. You can now use scipy to convert them into arrays as follows:
import scipy as sp
imarray0 = sp.asarray(im0)
imarray1 = sp.asarray(im1)
imarray0 and imarray1 are numpy arrays, which you can further use with OpenCV functions.
In case you are using Windows for coding, why don't you try the VideoCapture module? It is very easy to use and gives a PIL image as output, which you can later change to an array.
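One gotcha if you hand these arrays to OpenCV afterwards: PIL uses RGB channel order while OpenCV expects BGR, so a conversion is needed first. A minimal sketch, where im0 is the PIL image from the snippet above:
import numpy as np
import cv2

arr0 = np.asarray(im0)                        # HxWx3 uint8, RGB order
bgr0 = cv2.cvtColor(arr0, cv2.COLOR_RGB2BGR)  # what cv2 functions expect
gray0 = cv2.cvtColor(bgr0, cv2.COLOR_BGR2GRAY)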
Related
How can I display multiple cameras in one window? (OpenCV)
Using this code: Capturing video from two cameras in OpenCV at once, I open multiple cameras in separate windows, but I want to show them in one.
I found code for concatenating images (https://answers.opencv.org/question/188025/is-it-possible-to-show-two-video-feed-in-one-window/), but it doesn't work with cameras.
Same question was asked here previously, but no answer was given.
You can do this using numpy methods.
Option 1: np.vstack/np.hstack
Option 2: np.concatenate
Note 1: The methods will fail if the frames have different sizes, because you would be operating on matrices of different dimensions. That's why I resized one of the frames to fit the other.
Note 2: OpenCV also has hconcat and vconcat methods, but I didn't try to use them in Python (see the sketch after the example below).
Example Code: (using my Camera feed and a Video)
import cv2
import numpy as np
capCamera = cv2.VideoCapture(0)
capVideo = cv2.VideoCapture("desk.mp4")
while True:
    isNextFrameAvail1, frame1 = capCamera.read()
    isNextFrameAvail2, frame2 = capVideo.read()
    if not isNextFrameAvail1 or not isNextFrameAvail2:
        break
    # cv2.resize expects (width, height), while shape is (height, width)
    frame2Resized = cv2.resize(frame2, (frame1.shape[1], frame1.shape[0]))
    # ---- Option 1 ----
    #numpy_vertical = np.vstack((frame1, frame2Resized))
    numpy_horizontal = np.hstack((frame1, frame2Resized))
    # ---- Option 2 ----
    #numpy_vertical_concat = np.concatenate((frame1, frame2Resized), axis=0)
    #numpy_horizontal_concat = np.concatenate((frame1, frame2Resized), axis=1)
    cv2.imshow("Result", numpy_horizontal)
    cv2.waitKey(1)
Result: (for horizontal concat)
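Regarding Note 2: the same side-by-side view should also work with cv2.hconcat, which needs frames of equal height. A sketch (untested, per the note above):
import cv2

capCamera = cv2.VideoCapture(0)
capVideo = cv2.VideoCapture("desk.mp4")
while True:
    ok1, frame1 = capCamera.read()
    ok2, frame2 = capVideo.read()
    if not (ok1 and ok2):
        break
    # match sizes, then let OpenCV do the horizontal concatenation
    frame2 = cv2.resize(frame2, (frame1.shape[1], frame1.shape[0]))
    cv2.imshow("Result", cv2.hconcat([frame1, frame2]))
    if cv2.waitKey(1) == 27:
        break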
So I have a Python program on my Raspberry Pi running in an infinite while loop that takes an image from the camera every second.
With every iteration, the program creates a thread that processes the image. This processing consists of: a script that extracts a phone screen out of the image using OpenCV, and another script that extracts a QR code out of that screen.
I have used the program to process pre-taken images without problem; it only chokes when I put it through a continuous loop.
After a few iterations of the loop and trying to process the images, the program unexpectedly breaks and my Raspberry Pi abruptly shuts off. Does anyone know why this is happening?
I have been looking around for an answer, but my suspicions are with the threads: either I'm overloading the CPU or RAM, there are memory leaks, or the Raspberry Pi is drawing too much power.
EDIT:
From the comments below it seems that the power supply to the Pi is the problem. I'm currently running on a phone charger (5.0 V, 1.0 A), which is (very far) below the recommended 5.0 V, 2.5 A power supply, darn me. I will update this post when I get a new power supply and test the code out.
Also, running the program on my Windows laptop poses no problems at all.
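In case it helps anyone else checking the CPU/RAM suspicion first: a quick diagnostic (this assumes the psutil package, which is not part of my program) is to log usage in a separate terminal while the loop runs:
import time
import psutil

# print CPU and RAM usage once a second alongside the main program
while True:
    print("cpu %.0f%%  ram %.0f%%" % (psutil.cpu_percent(),
                                      psutil.virtual_memory().percent))
    time.sleep(1)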
This is my main script:
import picamera
import threading
import time
from processing.qr import get_qr_code
from processing.scan import scan_image
# Thread method
def extract_code(file):
    print 'Thread for: ' + file
    # Scans image to extract phone screen from image and then gets QR code from it
    scan_image(file)
    get_qr_code(file)
    return
# End methods

camera = picamera.PiCamera()
while True:
    time.sleep(1)
    # Use epoch time as the file name
    time_epoch = str(int(time.time()))
    image_path = "images/" + time_epoch + ".jpg"
    print "Taking photo: " + str(image_path)
    camera.capture(image_path)
    # Create thread to start processing image
    t = threading.Thread(target=extract_code, args=[time_epoch])
    t.start()
Below is the script that scans the image (scan.py).
In a nutshell, it takes an image, blurs it, finds edges, draws contours, checks whether there's a rectangle (e.g. a phone screen), and transforms and warps it into a new image containing only the phone screen.
from transform import four_point_transform
import imutils
import numpy as np
import argparse
import cv2
import os
def scan_image(file):
    images_dir = "images/"
    scans_dir = "scans/"
    input_file = images_dir + file + ".jpg"
    print "Scanning image: " + input_file

    # load the image and compute the ratio of the old height
    # to the new height, clone it, and resize it
    image = cv2.imread(input_file)
    ratio = image.shape[0] / 500.0
    orig = image.copy()
    image = imutils.resize(image, height = 500)

    # convert the image to grayscale, blur it, and find edges
    # in the image
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    edged = cv2.Canny(gray, 75, 200)

    # show the original image and the edge detected image
    print "STEP 1: Edge Detection"

    # find the contours in the edged image, keeping only the
    # largest ones, and initialize the screen contour
    _, cnts, _ = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    cnts = sorted(cnts, key = cv2.contourArea, reverse = True)[:5]
    # use None (not 0) so the found-contour check below is unambiguous
    screenCnt = None

    # loop over the contours
    for c in cnts:
        # approximate the contour
        peri = cv2.arcLength(c, True)
        approx = cv2.approxPolyDP(c, 0.02 * peri, True)
        # if our approximated contour has four points, then we
        # can assume that we have found our screen
        if len(approx) == 4:
            screenCnt = approx
            break

    # show the contour (outline) of the piece of paper
    print "STEP 2: Find contours of paper"
    # if screenCnt is not None:
    #     cv2.drawContours(image, [screenCnt], -1, (0, 255, 0), 2)

    # apply the four point transform to obtain a top-down
    # view of the original image
    if screenCnt is not None:
        warped = four_point_transform(orig, screenCnt.reshape(4, 2) * ratio)
        print "STEP 3: Apply perspective transform"
        output = scans_dir + file + "-result.png"
        if not os.path.exists(scans_dir):
            os.makedirs(scans_dir)
        cv2.imwrite(output, imutils.resize(warped, height = 650))
    else:
        print "No screen detected"
This is the code to scan a QR code out of the image:
import os
from time import sleep
def get_qr_code(image_name):
    scans_dir = "scans/"
    codes_dir = "codes/"
    input_scan_path = scans_dir + image_name + "-result.png"
    output_qr_path = codes_dir + image_name + "-result.txt"
    if not os.path.exists(codes_dir):
        os.makedirs(codes_dir)
    if os.path.exists(input_scan_path):
        os.system("zbarimg -q " + input_scan_path + " > " + output_qr_path)
        if os.path.exists(output_qr_path):
            strqrcode = open(output_qr_path, 'r').read()
            # print strqrcode
            print "Results for " + image_name + ": " + strqrcode
    else:
        print "File does not exist"
After a few iterations of the loop and trying to process the images, the program unexpectedly breaks and my Raspberry Pi abruptly shuts off. Does anyone know why this is happening?
I had a similar experience. My program (C++), processing camera frames in real time, unexpectedly crashed: sometimes with a segmentation fault inside OS code, sometimes by shutting off the whole board. Sometimes the program's executable file simply became zero bytes.
I spent a long time looking for memory leaks, CPU peaks, or other kinds of programming error. Then I suddenly realised my power supply was not strong enough. I changed to a more robust one and had no more problems.
See in this link the 2.5 A recommendation and check whether your power supply actually meets this requirement.
Maybe not the answer you are looking for, but using Python with OpenCV on an embedded system is not really a good idea; I have experienced a lot of problems using Python on embedded systems (even with the relatively powerful hardware the Raspberry Pi provides).
All the performance problems I experienced vanished when I used C++ instead of Python. OpenCV in C++ is pretty solid, and I highly recommend you give it a try.
How can I extract frames from a video file using Python3?
For example, I want to get 16 pictures from a video and combine them into a 4x4 grid.
I don't want 16 separate images at the end, I want one image containing 16 frames from the video.
----Edit----
import av
container = av.open('/home/uguraba/Downloads/equals/equals.mp4')
video = next(s for s in container.streams)
for packet in container.demux(video):
    for frame in packet.decode():
        if frame.index % 3000 == 0:
            frame.to_image().save('/home/uguraba/Downloads/equals/frame-%04d.jpg' % frame.index)
By using this script I can get frames, but lots of frames get saved. Can I grab specific frames, like frames 5000, 7500, and 10000?
Also, how can I see the total frame count?
Use PyMedia or PyAV to access image data and PIL or Pillow to manipulate it in desired form(s).
These libraries have plenty of examples, so with basic knowledge of video muxing/demuxing and picture editing you should be able to do it pretty quickly. It's not as complicated as it might seem at first.
Essentially, you demux the video stream into frames, going frame by frame.
You get the picture either in its original (e.g. JPEG) or raw form and push it into PIL/Pillow.
You do with it what you want, resizing etc... - PIL provides all necessary stuff.
And then you paste it into one big image at desired position.
That's all.
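Following that recipe with PyAV and Pillow, a sketch that picks 16 evenly spaced frames and pastes them into a 4x4 grid might look like this (the path, thumbnail size, and even spacing are my assumptions; stream.frames can be 0 if the container doesn't store a frame count):
import av
from PIL import Image

container = av.open('equals.mp4')
stream = container.streams.video[0]
total = stream.frames            # total frame count; 0 if unknown
step = max(total // 16, 1)

thumb_w, thumb_h = 320, 180
grid = Image.new('RGB', (thumb_w * 4, thumb_h * 4))
picked = 0
for i, frame in enumerate(container.decode(stream)):
    if i % step == 0 and picked < 16:
        im = frame.to_image().resize((thumb_w, thumb_h))
        # paste left-to-right, top-to-bottom into the 4x4 grid
        grid.paste(im, ((picked % 4) * thumb_w, (picked // 4) * thumb_h))
        picked += 1
grid.save('grid.jpg')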
You can do that with OpenCV 3, its Python wrapper, and Numpy.
First you need to capture the frames, then save them separately, and finally paste them all into a bigger matrix.
import numpy as np
import cv2
cap = cv2.VideoCapture(video_source)
# capture the 4 frames
_, frame1 = cap.read()
_, frame2 = cap.read()
_, frame3 = cap.read()
_, frame4 = cap.read()
# 'glue' the frames using numpy and vertical/horizontal stacks
big_frame = np.vstack((np.hstack((frame1, frame2)),
                       np.hstack((frame3, frame4))))
# Show the four frames combined into one 2x2 image
cv2.imshow('result', big_frame)
cv2.waitKey(1000)
To compile and install OpenCV3 and Numpy in Python3 you can follow this tutorial.
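Generalizing the snippet above to the 4x4 grid asked about: grab every k-th frame until you have 16, then stack them row by row (a sketch; the source path, step, and thumbnail size are placeholders):
import numpy as np
import cv2

cap = cv2.VideoCapture('input.mp4')
step, frames = 30, []
i = 0
while len(frames) < 16:
    ok, frame = cap.read()
    if not ok:
        break
    if i % step == 0:
        frames.append(cv2.resize(frame, (320, 180)))
    i += 1
cap.release()

if len(frames) == 16:
    # four rows of four thumbnails each, stacked vertically
    rows = [np.hstack(frames[r*4:(r+1)*4]) for r in range(4)]
    cv2.imwrite('grid.jpg', np.vstack(rows))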
You can implement a kind of "control panel" for 4 different video sources with something like this:
import numpy as np
import cv2
cam1 = cv2.VideoCapture(video_source1)
cam2 = cv2.VideoCapture(video_source2)
cam3 = cv2.VideoCapture(video_source3)
cam4 = cv2.VideoCapture(video_source4)
while True:
    more1, frame_cam1 = cam1.read()
    more2, frame_cam2 = cam2.read()
    more3, frame_cam3 = cam3.read()
    more4, frame_cam4 = cam4.read()
    if not all([more1, more2, more3, more4]) or cv2.waitKey(1) & 0xFF in (ord('q'), ord('Q')):
        break
    big_frame = np.vstack((np.hstack((frame_cam1, frame_cam2)),
                           np.hstack((frame_cam3, frame_cam4))))
    # Show the four feeds combined into one 2x2 image
    cv2.imshow('result', big_frame)
print('END. One or more sources ended.')
I am looking for a way to concatenate a directory of image files (e.g., JPEGs) into a movie file (MOV, MP4, AVI) with Python. Ideally, this would also allow me to take multiple JPEGs from that directory and "paste" them into a grid which is one frame of a movie file. Which modules could achieve this?
You could use the Python interface of OpenCV; in particular, a VideoWriter could probably do the job. From what I understand of the docs, the following would do what you want:
w = cvCreateVideoWriter(filename, -1, <your framerate>,
<your frame size>, is_color=1)
and, in a loop, for each file:
cvWriteFrame(w, frame)
Note that I have not tried this code, but I think that I got the idea right. Please tell me if it works.
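For reference, a rough modern-cv2 version of the same idea (a sketch, also untested; the file pattern, codec, and frame size are placeholders):
import glob
import cv2

frame_size = (640, 480)
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('out.avi', fourcc, 25.0, frame_size)
for name in sorted(glob.glob('*.jpg')):
    img = cv2.imread(name)
    # every frame written must match the size the writer was opened with
    out.write(cv2.resize(img, frame_size))
out.release()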
Here's a cut-down version of a script I have that takes frames from one video, modifies them (that code is taken out), and writes them to another video. Maybe it'll help.
import cv2
fourcc = cv2.cv.CV_FOURCC(*'XVID')
out = cv2.VideoWriter('out_video.avi', fourcc, 24, (704, 240))
c = cv2.VideoCapture('in_video.avi')
while True:
    _, f = c.read()
    if f is None:
        break
    f2 = f.copy()  # make a copy of the frame
    # do a bunch of stuff (missing)
    out.write(f2)  # write frame to the output video
out.release()
cv2.destroyAllWindows()
c.release()
If you have a bunch of images, load them in a loop and just write one image after another to your vid.
I finally arrived at a working version of the project that got me into this question.
Now I want to contribute the knowledge I gained.
Here is my solution for taking all pictures in the current directory and converting them into a video, with the images centered on a black background, so this solution works with images of different sizes.
import glob
import cv2
import numpy as np
DESIRED_SIZE = (800, 600)
SLIDE_TIME = 5 # Seconds each image
FPS = 24
fourcc = cv2.VideoWriter.fourcc(*'X264')
writer = cv2.VideoWriter('output.avi', fourcc, FPS, DESIRED_SIZE)
for file_name in glob.iglob('*.jpg'):
    img = cv2.imread(file_name)

    # Resize image to fit into DESIRED_SIZE
    height, width, _ = img.shape
    proportion = min(DESIRED_SIZE[0]/width, DESIRED_SIZE[1]/height)
    new_size = (int(width*proportion), int(height*proportion))
    img = cv2.resize(img, new_size)

    # Centralize image in a black frame with DESIRED_SIZE
    target_size_img = np.zeros((DESIRED_SIZE[1], DESIRED_SIZE[0], 3), dtype='uint8')
    width_offset = (DESIRED_SIZE[0] - new_size[0]) // 2
    height_offset = (DESIRED_SIZE[1] - new_size[1]) // 2
    target_size_img[height_offset:height_offset+new_size[1],
                    width_offset:width_offset+new_size[0]] = img

    for _ in range(SLIDE_TIME * FPS):
        writer.write(target_size_img)

writer.release()
Is it actually important to you that the solution uses Python and produces a movie file, or are these just your expectations of what a solution would look like?
If you just want to be able to play back a bunch of jpeg files as a movie, you can do it without using python or cluttering up your computer with .avi/.mov/mp4 files by going to vidmyfigs.com and using your mouse to select image files from your hard drive. The "movie" plays back in your Web browser.
I am writing a simple motion detection program, but I want it to be cross-platform, so I'm using Python and the pyglet library, since it provides a simple way to load videos in different formats (especially WMV and MPEG). So far I have the code given below, which loads the movie and plays it in a window. Now I need to:
1) grab frame at time t and t-1
2) do a subtraction to see which pixels are active for motion detection.
Any ideas on how to grab frames and skip over frames? And is it possible to put the pixel values into a numpy matrix or something directly from pyglet, or should I look into using something other than pyglet?
thanks
kuaywai
import pyglet
import sys
window = pyglet.window.Window(resizable=True)
window.set_minimum_size(320,200)
window.set_caption('Motion detect 1.0')
video_intro = pyglet.resource.media('movie1.wmv')
player = pyglet.media.Player()
player.queue(video_intro)
print 'calculating movie size...'
if not player.source or not player.source.video_format:
    sys.exit()

myWidth = player.source.video_format.width
myHeight = player.source.video_format.height
if player.source.video_format.sample_aspect > 1:
    myWidth *= player.source.video_format.sample_aspect
elif player.source.video_format.sample_aspect < 1:
    myHeight /= player.source.video_format.sample_aspect
print 'its size is %d,%d' % (myWidth,myHeight)
player.play()
@window.event
def on_draw():
    window.clear()
    (w, h) = window.get_size()
    player.get_texture().blit(0, h - myHeight,
                              width=myWidth,
                              height=myHeight)
pyglet.app.run()
To me it seems like you have to skip the play function and manually step through the video/animation, maybe using source.get_animation().frames, which is a list of frames where each frame is a simple image. I'm guessing this isn't really going to be practical with large videos, but that is generally not something you should be handling in Python anyway.
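For the subtraction step itself (step 2 in the question), once two consecutive frames are available as numpy arrays, however they were grabbed, the motion mask is only a few lines. A minimal sketch:
import numpy as np

def motion_mask(prev, curr, threshold=25):
    # work in a signed type so the subtraction can't wrap around
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    # a pixel counts as "moved" if any channel changed by more than threshold
    return diff.max(axis=-1) > threshold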