I'm trying to use MOG background subtraction, but the "history" parameter doesn't seem to have any effect.
OpenCV 2.4.13
Python (2.7.6)
Observation: The program appears to use the very first frame it captures for all future background subtractions.
Expectation: The background should slowly evolve based on the "history" parameter, so that if the camera angle changes, or if a person/object leaves the field-of-view, the "background" image will change accordingly.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
mog = cv2.BackgroundSubtractorMOG(history=10, nmixtures=5, backgroundRatio=0.25)
while True:
    ret, img1 = cap.read()
    if ret is True:
        cv2.imshow("original", img1)
        img_mog = mog.apply(img1)
        cv2.imshow("mog", img_mog)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
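If it is relevant: I would also have expected the optional learningRate argument that the 2.4 docs list for apply() to control the update speed. A sketch, untested, assuming the Python wrapper exposes that argument:

# Sketch (assumption): an explicit learning rate between 0 and 1;
# 0 would freeze the background model, 1 would rebuild it every frame.
img_mog = mog.apply(img1, learningRate=0.05)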
Thank you in advance for your help!
Related
I am new to OpenCV. I want to display a video/webcam feed in OpenCV. I have written the following code:
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, img = cap.read()
    cv2.imshow("Frame", img)
Instead of getting the webcam feed or video, I get a black screen with no output, as shown in the picture.
You need to add cv2.waitKey(1), or some other number of milliseconds if you wish. This displays each frame for at least 1 ms and lets the window process its events. For example:
if cv2.waitKey(1) & 0xFF == ord('q'):
    break
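Putting it together, a minimal working display loop would look like this (a sketch of the corrected version of your code, assuming a webcam at index 0):

import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, img = cap.read()
    if not ret:
        break
    cv2.imshow("Frame", img)
    # waitKey pumps the GUI event loop; without it the window stays black
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()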
I am trying to write a script that manipulates video from a webcam using OpenCV with Python, but I am running into some issues.
If I run the video capture stream with no pixel manipulation applied, the stream works fine and has a smooth frame rate. However, I applied a threshold loop as a test, and my stream undergoes major lag and updates once every few seconds. Any ideas if it is possible to optimise this? Ideally, I am looking to get a 30 fps stream with the video manipulation applied. Here is the code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
T = 100
while True:
    ret, frame = cap.read()
    height, width, channels = frame.shape
    for x in range(width):
        for y in range(height):
            if frame[y, x, 0] < T:
                frame[y, x] = 0
            else:
                frame[y, x] = 255
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Accessing an image pixel by pixel is generally very bad practice in image processing, as it slows performance considerably. Packages like OpenCV and NumPy optimise this by using matrix operations, which makes your program much faster. Here is sample code that performs your task, but much faster:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
T = 100
while True:
    ret, frame = cap.read()
    height, width, channels = frame.shape
    B, G, R = cv2.split(frame)
    # for x in range(width):
    #     for y in range(height):
    #         if frame[y, x, 0] < T:
    #             frame[y, x] = 0
    #         else:
    #             frame[y, x] = 255
    _, B = cv2.threshold(B, T, 255, cv2.THRESH_BINARY)
    frame = cv2.merge((B, G, R))
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
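For comparison, the same effect can be had with pure NumPy boolean indexing, which reproduces the original loop exactly (all three channels set to 0 or 255 based on the blue channel). A sketch of the loop body, replacing the split/threshold/merge lines:

# Boolean mask over the blue channel, broadcast across all channels
mask = frame[:, :, 0] < T
frame[mask] = 0
frame[~mask] = 255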
I have recently bought a stereo camera through Amazon and I want to use it for depth mapping. The problem is that the output I get from the camera is a single video containing the output of both cameras.
What I want is two separate outputs from the single USB port, if that is possible. I could use cropping, but I don't want to, because I am trying to reduce processing time and I want the outputs separately.
The above image was generated by the following code:
import numpy as np
import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_PROP_FPS, 120)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
while True:
    s, original = cam.read()
    cv2.imshow('original', original)
    if cv2.waitKey(1) & 0xFF == ord('w'):
        break
cam.release()
cv2.destroyAllWindows()
I have also tried other techniques such as:
import numpy as np
import cv2

left = cv2.VideoCapture(1)
right = cv2.VideoCapture(2)
left.set(cv2.CAP_PROP_FRAME_WIDTH, 720)
left.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
right.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
right.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
left.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
right.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))

while True:
    # Grab both frames first, then retrieve, to minimise latency between cameras
    if not (left.grab() and right.grab()):
        break
    _, leftFrame = left.retrieve()
    leftHeight, leftWidth = leftFrame.shape[:2]
    _, rightFrame = right.retrieve()
    rightHeight, rightWidth = rightFrame.shape[:2]
    # TODO: Calibrate the cameras and correct the images
    cv2.imshow('left', leftFrame)
    cv2.imshow('right', rightFrame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
left.release()
right.release()
cv2.destroyAllWindows()
but the third camera index is not recognised. Any help would be appreciated.
My OpenCV version is 3.4.
P.S. If anyone can present a solution in C++, that would also work for me.
OK, so after analysing the problem, I figured that the best way is to crop the image in half, as it saves processing time. If you have two different image sources, your pipeline time for fetching the images is doubled. After testing the stereo camera with and without cropping, I saw no noticeable change in FPS. Here is simple code for cropping the video and displaying it in two different windows.
import numpy as np
import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_PROP_FPS, 120)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
s, original = cam.read()
height, width, channels = original.shape
print(width)
print(height)
while True:
    s, original = cam.read()
    left = original[0:height, 0:int(width / 2)]
    right = original[0:height, int(width / 2):width]
    cv2.imshow('Left', left)
    cv2.imshow('Right', right)
    if cv2.waitKey(1) & 0xFF == ord('w'):
        break
cam.release()
cv2.destroyAllWindows()
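As an aside, the two slices can also be produced in one call with NumPy (equivalent to the manual indexing above when the width is even); a sketch:

# Split the side-by-side frame into equal left/right halves
left, right = np.hsplit(original, 2)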
I am taking input from a video and I want to take the median value of the first 5 frames, so that I can use it as the background image for motion detection using frame differencing.
Also, I want to use a time condition: say, if motion is not detected, calculate the background again; else, wait t seconds. I am new to OpenCV and I don't know how to do it. Please help.
Also, I want to read my video at 1 fps, but this does not work. Here is the code I have:
import cv2

BLUR_SIZE = 3
NOISE_CUTOFF = 12
cam = cv2.VideoCapture('gh10fps.mp4')
cam.set(3, 640)   # 3 = CV_CAP_PROP_FRAME_WIDTH
cam.set(4, 480)   # 4 = CV_CAP_PROP_FRAME_HEIGHT
cam.set(cv2.cv.CV_CAP_PROP_FPS, 1)
fps = cam.get(cv2.cv.CV_CAP_PROP_FPS)
print "Current FPS: ", fps
If you really want the median of the first 5 frames, then the following should do what you are looking for:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

# Collect the first 5 frames in grayscale
frames = []
for _ in range(5):
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames.append(gray)

# Per-pixel median across the 5 frames
median = np.median(frames, axis=0).astype(dtype=np.uint8)

cv2.imshow('frame', median)
cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()
Note, this is just taking the source from a webcam as an example.
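From there, using the median as the background for frame differencing (as the question describes) could replace the final imshow/waitKey block. A sketch, where the threshold value 25 is an arbitrary assumption:

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Absolute difference against the median background, then binarise
    diff = cv2.absdiff(gray, median)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    cv2.imshow('motion', motion_mask)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break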
What I need to do is fairly simple:
load a 5-frame video file
detect background
On every frame, one by one:
subtract background (create foreground mask)
do some calculations on the foreground mask
save both the original frame and the foreground mask
Just to see the 5 frames and the 5 corresponding fgmasks:
import numpy as np
import cv2

cap = cv2.VideoCapture('test.avi')
fgbg = cv2.BackgroundSubtractorMOG()
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    fgmask = fgbg.apply(frame)
    # Display the fgmask frame
    cv2.imshow('fgmask', fgmask)
    # Display the original frame
    cv2.imshow('img', frame)
    k = cv2.waitKey(0) & 0xff
    if k == 5:
        break
cap.release()
cv2.destroyAllWindows()
Every frame gets opened and displayed correctly, but the displayed fgmask does not correspond to the displayed original frame. Somewhere in the process, the order of the fgmasks gets mixed up.
The background does get subtracted correctly, but I don't get the 5 expected fgmasks.
What am I missing? I feel like this should be straightforward: the while loop runs over the 5 frames of the video, and fgbg.apply applies background subtraction to each frame.
The OpenCV version I use is opencv-2.4.9-3.
As bikz05 suggested, the running average method worked pretty well on my 5-image sets. Thanks for the tip!
import cv2
import numpy as np

c = cv2.VideoCapture('test.avi')
_, f = c.read()
avg1 = np.float32(f)
avg2 = np.float32(f)

# loop over images and estimate background
for x in range(0, 4):
    _, f = c.read()
    cv2.accumulateWeighted(f, avg1, 1)      # alpha = 1: background tracks the current frame
    cv2.accumulateWeighted(f, avg2, 0.01)   # alpha = 0.01: background updates slowly
    res1 = cv2.convertScaleAbs(avg1)
    res2 = cv2.convertScaleAbs(avg2)
    cv2.imshow('img', f)
    cv2.imshow('avg1', res1)
    cv2.imshow('avg2', res2)
    k = cv2.waitKey(0) & 0xff
    if k == 5:
        break

c.release()
cv2.destroyAllWindows()
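From there, the per-frame foreground mask and the saving step the question asks for could be added inside the loop, right after res2 is computed (a sketch; the threshold of 30 is an assumption):

    diff = cv2.absdiff(f, res2)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, fgmask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    cv2.imwrite('frame_%d.png' % x, f)         # save the original frame
    cv2.imwrite('fgmask_%d.png' % x, fgmask)   # save the foreground mask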