What I need to do is fairly simple:
load a 5-frame video file
detect the background
On every frame, one by one:
subtract the background (create a foreground mask)
do some calculations on the foreground mask
save both the original frame and the foreground mask
Just to see the 5 frames and the 5 corresponding fgmasks:
import numpy as np
import cv2

cap = cv2.VideoCapture('test.avi')
fgbg = cv2.BackgroundSubtractorMOG()

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()
    if not ret:  # stop once the 5 frames are exhausted
        break
    fgmask = fgbg.apply(frame)
    # Display the fgmask frame
    cv2.imshow('fgmask', fgmask)
    # Display original frame
    cv2.imshow('img', frame)
    k = cv2.waitKey(0) & 0xff
    if k == 27:  # ESC to quit
        break

cap.release()
cv2.destroyAllWindows()
Every frame opens and displays correctly, but the displayed fgmask does not correspond to the displayed original frame. Somewhere in the process, the order of the fgmasks gets mixed up.
The background does get subtracted correctly, but I don't get the 5 expected fgmasks.
What am I missing? I feel like this should be straightforward: the while loop runs over the 5 frames of the video, and fgbg.apply applies the background subtraction to each frame.
The OpenCV version I use is opencv-2.4.9-3.
As bikz05 suggested, the running-average method worked pretty well on my 5-image sets. Thanks for the tip!
import cv2
import numpy as np

c = cv2.VideoCapture('test.avi')
_, f = c.read()

avg1 = np.float32(f)
avg2 = np.float32(f)

# loop over the remaining frames and estimate the background
for x in range(4):
    _, f = c.read()
    cv2.accumulateWeighted(f, avg1, 1)     # alpha=1: avg1 tracks the current frame
    cv2.accumulateWeighted(f, avg2, 0.01)  # avg2 evolves slowly -> background estimate
    res1 = cv2.convertScaleAbs(avg1)
    res2 = cv2.convertScaleAbs(avg2)
    cv2.imshow('img', f)
    cv2.imshow('avg1', res1)
    cv2.imshow('avg2', res2)
    k = cv2.waitKey(0) & 0xff
    if k == 27:  # ESC to quit
        break

c.release()
cv2.destroyAllWindows()
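To go from the background estimate back to the foreground masks the original question asked for, one option (a minimal sketch of my own, assuming the same test.avi and the slowly evolving average above; the threshold value 30 is an arbitrary cutoff) is an absolute difference against the running average followed by a binary threshold:

import cv2
import numpy as np

c = cv2.VideoCapture('test.avi')
_, f = c.read()
avg = np.float32(f)

while True:
    ret, f = c.read()
    if not ret:
        break
    cv2.accumulateWeighted(f, avg, 0.01)   # slowly evolving background
    bg = cv2.convertScaleAbs(avg)
    diff = cv2.absdiff(f, bg)              # per-pixel difference to the background
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, fgmask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)  # 30 is a guess; tune it
    cv2.imshow('frame', f)
    cv2.imshow('fgmask', fgmask)
    if cv2.waitKey(0) & 0xff == 27:
        break

c.release()
cv2.destroyAllWindows()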
I'm trying to make a Python program with OpenCV which opens the webcam, takes several images with different exposures in real time (40 ms, 95 ms, 150 ms), and averages them at the end.
I tried to create a loop in which I change the exposure time, update the frame, and save it in a list. The problem is that the display remains static and the rendering hardly changes, so merging the images gives a result whose exposure time is almost 40 ms.
I supposed that after setting the exposure time, the frame needs some time to update, so I added time.sleep to suspend execution for 3 seconds, but it was in vain.
Here is my code:
import numpy as np
import cv2
import os
import time

capture = cv2.VideoCapture(0, cv2.CAP_V4L2)

while True:
    (grabbed, frame) = capture.read()
    if not grabbed:
        break
    # Resize frame
    width = 1500
    height = 1000
    dim = (width, height)
    frame = cv2.resize(frame, dim, interpolation=cv2.INTER_AREA)
    cv2.imshow('RGB', frame)
    key = cv2.waitKey(1) & 0xFF  # read the key once; repeated waitKey calls each consume a key event
    if key == ord('q'):
        break
    if key == ord('h') or key == ord('H'):
        repertory = input("Enter the name of the directory: ")
        if not os.path.exists(repertory):
            os.mkdir(repertory)
        exposure = [40, 95, 150]
        ims = []
        for i in exposure:
            capture.set(cv2.CAP_PROP_EXPOSURE, i)  # Setting exposure
            (grabbed, frame) = capture.read()      # Updating frame
            if grabbed:
                cv2.imshow('RGB', frame)           # Display
                ims.append(frame)
        # Convert to numpy
        ims = np.array([np.array(im) for im in ims])
        # Average and convert to uint8
        imave = np.average(ims, axis=0)
        imave = imave.astype(np.uint8)
        # HDR image
        cv2.imwrite(repertory + '/' + repertory + '_HDR8.jpg', imave)

capture.release()
cv2.destroyAllWindows()
Is there an optimal solution that allows taking pictures with different exposure times in real time and automatically?
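One thing worth checking (a sketch under assumptions, not a confirmed fix): many V4L2 drivers keep auto-exposure enabled unless it is explicitly switched to manual, and cameras often need a few frames before a new exposure setting reaches the sensor, so sleeping does not help if stale frames stay queued. Disabling auto-exposure and discarding a few reads after each change might look like this; the auto-exposure value and the number of discarded frames are driver-dependent guesses:

import cv2

capture = cv2.VideoCapture(0, cv2.CAP_V4L2)
# 0.25 (or 1 on some drivers) typically selects manual exposure on V4L2; 0.75 (or 3) is auto
capture.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)

ims = []
for exposure in [40, 95, 150]:
    capture.set(cv2.CAP_PROP_EXPOSURE, exposure)
    # discard a few frames so the new exposure actually takes effect
    for _ in range(5):
        capture.grab()
    grabbed, frame = capture.read()
    if grabbed:
        ims.append(frame)

capture.release()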
I want to remove the duplication of objects: when the camera opens, it captures the first frame and saves it to disk; then, when the next object appears in the scene, it saves that object's frame (it does not save the same frame consecutively).
I have written code to compare two consecutive webcam frames. I want to store one frame in an array (max limit 3) to compare it with the current frame, so the first frame is saved to disk and compared until the next object appears (I used a threshold value for this purpose).
How can I save the frame to an array and compare it with the current frame?
from skimage.metrics import structural_similarity
import imutils
import sys
import datetime
import cv2
import time
import numpy as np

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame1 = cap.read()   # first image
    time.sleep(1 / 50)         # slight delay
    ret, frame2 = cap.read()   # second image
    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    # compute the Structural Similarity Index (SSIM) between the two
    # images, ensuring that the difference image is returned
    (score, diff) = structural_similarity(gray1, gray2, full=True)
    diff = (diff * 255).astype("uint8")
    print("SSIM: {}".format(score))
    # threshold the difference image, followed by finding contours to
    # obtain the regions of the two input images that differ
    thresh = cv2.threshold(diff, 0, 255,
                           cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
    if np.mean(thresh) < 0.4:
        print("New object Detected")
        # x, y, w, h were undefined in the original snippet; take the bounding
        # box of the largest differing contour so the crop below is defined
        cnts = imutils.grab_contours(cv2.findContours(thresh.copy(),
                                                      cv2.RETR_EXTERNAL,
                                                      cv2.CHAIN_APPROX_SIMPLE))
        if cnts:
            (x, y, w, h) = cv2.boundingRect(max(cnts, key=cv2.contourArea))
            date_string = datetime.datetime.now().strftime("%Y-%m-%d-%H:%M:%S")
            cv2.imwrite('img/img-' + date_string + '.png',
                        frame2[y:y + h + 30, x:x + w + 30])
    # Display the resulting frame
    cv2.imshow('frame1', frame1)
    cv2.imshow('frame2', frame2)
    cv2.imshow("Diff", diff)
    cv2.imshow("Thresh", thresh)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
Not hard. You can get the image as a NumPy array; for a 720p camera the shape is (720, 1280, 3). To save it with PIL, try this:
from PIL import Image  # this import was missing

...
ret, frame1 = cap.read()  # first image
print(frame1.shape)       # e.g. (720, 1280, 3)
rgb_frame1 = frame1[..., ::-1]  # reverse the channel order: BGR -> RGB
im = Image.fromarray(rgb_frame1)
im.save("your_file.jpeg")
time.sleep(1/50)  # slight delay
...
Note: you need to change the channel order or you will get a blue-tinted image, because OpenCV returns frames in BGR order (alternatively, use cv2.cvtColor(frame1, cv2.COLOR_BGR2RGB)).
Then you can store the frames and compare them.
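As for keeping a 3-slot store of frames to compare against, here is a minimal sketch (my own assumption of how to wire it up, using collections.deque with maxlen=3 as the rolling array and the same SSIM measure as in the question; the 0.8 similarity cutoff is arbitrary):

import cv2
from collections import deque
from skimage.metrics import structural_similarity

cap = cv2.VideoCapture(0)
stored = deque(maxlen=3)  # rolling store of the last 3 saved frames

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # the frame counts as new only if it is dissimilar to every stored frame
    is_new = all(structural_similarity(cv2.cvtColor(old, cv2.COLOR_BGR2GRAY), gray) < 0.8
                 for old in stored)
    if is_new:
        stored.append(frame)  # oldest frame drops off automatically at maxlen
        print("new object, frames stored:", len(stored))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()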
I am trying to write a script that manipulates video from a webcam using OpenCV in Python, but I am running into some issues.
If I run the video capture stream with no pixel manipulation applied, the stream works fine and has a smooth frame rate. However, when I apply a threshold loop as a test, the stream lags badly and updates only once every few seconds. Any ideas on how to optimise this? Ideally, I am looking for a 30 fps stream with the manipulation applied. Here is the code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
T = 100

while True:
    ret, frame = cap.read()
    height, width, channels = frame.shape
    for x in range(width):
        for y in range(height):
            if frame[y, x, 0] < T:
                frame[y, x] = 0
            else:
                frame[y, x] = 255
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Accessing pixels one by one is very bad practice in image processing, as it slows performance considerably. Packages like OpenCV and NumPy optimize for this with matrix operations, which make your program much faster. Here is a sample that performs your task much faster:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
T = 100

while True:
    ret, frame = cap.read()
    height, width, channels = frame.shape
    B, G, R = cv2.split(frame)
    # for x in range(width):
    #     for y in range(height):
    #         if frame[y,x,0] < T:
    #             frame[y,x]=0
    #         else:
    #             frame[y,x]=255
    _, B = cv2.threshold(B, T, 255, cv2.THRESH_BINARY)
    # note: this thresholds only the blue channel; to reproduce the original
    # loop exactly (whole pixel set to 0/255), merge (B, B, B) instead
    frame = cv2.merge((B, G, R))
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
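An even shorter variant (a sketch of the same idea in pure NumPy, and one that matches the original loop's behavior of setting the whole pixel to 0 or 255 based on the blue channel):

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
T = 100

while True:
    ret, frame = cap.read()
    if not ret:
        break
    mask = frame[:, :, 0] < T   # boolean mask over the blue channel
    frame[mask] = 0             # boolean indexing assigns all three channels at once
    frame[~mask] = 255
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()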
I'm trying to use MOG background subtraction, but the "history" parameter doesn't seem to have any effect.
OpenCV 2.4.13
Python (2.7.6)
Observation: The program appears to use the very first frame it captures for all future background subtractions.
Expectation: The background should slowly evolve based on the "history" parameter, so that if the camera angle changes, or if a person/object leaves the field-of-view, the "background" image will change accordingly.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
mog = cv2.BackgroundSubtractorMOG(history=10, nmixtures=5, backgroundRatio=0.25)

while True:
    ret, img1 = cap.read()
    if ret is True:
        cv2.imshow("original", img1)
        img_mog = mog.apply(img1)
        cv2.imshow("mog", img_mog)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
Thank you in advance for your help!
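One detail that may explain this (an assumption based on the 2.4 apply() signature, where the optional learningRate argument can default to 0, i.e. the model never updates): try passing an explicit learning rate so the background actually evolves. A sketch:

# 2.4 Python binding: apply(image[, fgmask[, learningRate]])
# second positional argument is the optional output mask, third is learningRate
img_mog = mog.apply(img1, None, 1.0 / 10)  # roughly 1/history as the update rate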
I am taking input from a video, and I want to take the median of the first 5 frames so that I can use it as the background image for motion detection by frame differencing.
Also, I want to use a time condition: if motion is not detected, recalculate the background, else wait t seconds. I am new to OpenCV and I don't know how to do it. Please help.
Also, I want to read my video at 1 fps, but this does not work. Here is the code I have:
import cv2

BLUR_SIZE = 3
NOISE_CUTOFF = 12

cam = cv2.VideoCapture('gh10fps.mp4')
cam.set(3, 640)   # CV_CAP_PROP_FRAME_WIDTH
cam.set(4, 480)   # CV_CAP_PROP_FRAME_HEIGHT
cam.set(cv2.cv.CV_CAP_PROP_FPS, 1)
fps = cam.get(cv2.cv.CV_CAP_PROP_FPS)
print "Current FPS: ", fps
If you really want the median of the first 5 frames, then the following should do what you are looking for:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

frames = []
for _ in range(5):
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames.append(gray)

# per-pixel median across the 5 grayscale frames
median = np.median(frames, axis=0).astype(dtype=np.uint8)

cv2.imshow('frame', median)
cv2.waitKey(0)

cap.release()
cv2.destroyAllWindows()
Note, this is just taking the source from a webcam as an example.
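On the 1 fps part of the question: CAP_PROP_FPS is generally ignored when reading from a file (it describes the file's native rate), so sampling at 1 fps usually means skipping frames yourself. A minimal sketch of that (my own assumption of the setup, using the gh10fps.mp4 file and the OpenCV 2.x constants from the question):

import cv2

cam = cv2.VideoCapture('gh10fps.mp4')
native_fps = cam.get(cv2.cv.CV_CAP_PROP_FPS)   # the file's own frame rate
step = max(1, int(round(native_fps)))          # keep one frame per second of video

count = 0
while True:
    ret, frame = cam.read()
    if not ret:
        break
    if count % step == 0:   # every step-th frame is one second apart
        cv2.imshow('1 fps sample', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    count += 1

cam.release()
cv2.destroyAllWindows()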