Reduced camera resolution but larger display window in Python OpenCV

I am working on a project which requires face detection on a Raspberry Pi, using a USB camera. The frame rate was very slow, so I scaled down the capture resolution with VideoCapture.set(). This dropped the resolution to 320 x 216 as I set it and increased the capture frame rate considerably, but the feed is now displayed in a 320 x 216 window. I want to keep the same capture resolution but display the feed in a larger window. I am just a beginner with Python and OpenCV, so please help me do it. Below is the code I wrote for a simple camera feed.
import numpy as np
import cv2
import time

cap = cv2.VideoCapture(-1)
cap.set(3, 320)  # width
cap.set(4, 216)  # height
cap.set(5, 15)   # frame rate
time.sleep(2)

while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow("captured video", frame)
    if cv2.waitKey(33) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

If I understand you correctly, you want the displayed image to be a scaled-up version of the original. If so, you just need cv2.resize:
display_scale = 4
height, width = frame.shape[:2]
display_height, display_width = display_scale * height, display_scale * width

# you can choose different interpolation methods
frame_display = cv2.resize(frame, (display_width, display_height),
                           interpolation=cv2.INTER_CUBIC)
cv2.imshow("captured video", frame_display)

Related

OpenCV cv2.CAP_PROP_FPS cuts out parts of my image

I'm using OpenCV in Python 3 on a Raspberry Pi Compute Module 4 with two cameras. I want to capture video from the camera at 90 fps. That works fine, but for whatever reason OpenCV decides to cut a lot from my image when my desired fps exceeds 40, which is a problem for me.
[Image: a captured video from the camera at 40 fps (or lower)]
[Image: the same camera at 41 fps (or higher)]
Here is my code; right now it is just supposed to show a window with the video from the camera:
import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_PROP_FPS, 41)

width = 640
height = 480
cam.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

while True:
    check, frame = cam.read()
    cv2.imshow('video', frame)
    key = cv2.waitKey(1)
    if key == 27:
        break

cam.release()
cv2.destroyAllWindows()
I didn't find any useful options in the OpenCV documentation, so I hope you can help me.
I tried lowering the resolution, but that didn't change anything.
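A hedged diagnostic sketch (my assumption, not a confirmed fix): many UVC cameras can only reach their higher frame rates by switching to a cropped sensor readout mode, so it is worth reading back what the driver actually negotiated after the set() calls:

# check what the driver actually negotiated (may differ from what was requested)
actual_fps = cam.get(cv2.CAP_PROP_FPS)
actual_w = cam.get(cv2.CAP_PROP_FRAME_WIDTH)
actual_h = cam.get(cv2.CAP_PROP_FRAME_HEIGHT)
print("negotiated: %.0fx%.0f @ %.0f fps" % (actual_w, actual_h, actual_fps))

On a Raspberry Pi, running v4l2-ctl --list-formats-ext lists which resolution/fps combinations the camera really supports, which would show whether 41+ fps is only offered in a cropped mode.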

Background subtractor Python OpenCV (remove granulation)

Hello, I am using MOG2 to build a background subtractor from a base frame against the following frames, but it is showing too much noise. I would like to know if there is another background subtractor that can eliminate these points.
I also have another problem: when a car passes with its headlights on, the headlights show up as white in my image. I need to ignore the reflection of the headlights on the ground. Does someone know how to do that?
My code for the BGS:
backSub = cv2.createBackgroundSubtractorMOG2(history=1, varThreshold=150, detectShadows=True)
fgMask = backSub.apply(frame1)
fgMask2 = backSub.apply(actualframe)
maskedFrame = fgMask2 - fgMask
cv2.imshow("maskedFrame1 "+str(id), maskedFrame)
You can try to perform a Gaussian blur before sending the frame to backSub.apply(), or experiment with the parameters of cv2.createBackgroundSubtractorMOG2(); if you need a better explanation of what they do, try this page.
This is the result of a 7x7 Gaussian blur using this video.
Code:
import cv2
import numpy as np
import sys

# read input video
cap = cv2.VideoCapture('traffic.mp4')
if not cap.isOpened():
    print("!!! Failed to open video")
    sys.exit(-1)

# retrieve input video frame size
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
print('* Input Video settings:', frame_width, 'x', frame_height, '#', fps)

# adjust output video size
frame_height = int(frame_height / 2)
print('* Output Video settings:', frame_width, 'x', frame_height, '#', fps)

# create output video
video_out = cv2.VideoWriter('traffic_out.mp4', cv2.VideoWriter_fourcc(*'MP4V'), fps, (frame_width, frame_height))
#video_out = cv2.VideoWriter('traffic_out.avi', cv2.VideoWriter_fourcc('M','J','P','G'), fps, (frame_width, frame_height), True)

# create MOG
backSub = cv2.createBackgroundSubtractorMOG2(history=5, varThreshold=60, detectShadows=True)

while True:
    # retrieve frame from the video
    ret, frame = cap.read()  # 3-channels
    if frame is None:
        break

    # resize to 50% of its original size
    frame = cv2.resize(frame, None, fx=0.5, fy=0.5)

    # gaussian blur helps to remove noise
    blur = cv2.GaussianBlur(frame, (7,7), 0)
    #cv2.imshow('frame_blur', blur)

    # subtract background
    fgmask = backSub.apply(blur)  # single channel
    #cv2.imshow('fgmask', fgmask)

    # concatenate both frames horizontally and write it as output
    fgmask_bgr = cv2.cvtColor(fgmask, cv2.COLOR_GRAY2BGR)  # convert single channel image to 3-channels
    out_frame = cv2.hconcat([blur, fgmask_bgr])
    #print('output=', out_frame.shape)  # shape=(360, 1280, 3)

    cv2.imshow('output', out_frame)
    video_out.write(out_frame)

    # quick pause to display the windows
    if cv2.waitKey(1) == 27:
        break

# release resources
cap.release()
video_out.release()
cv2.destroyAllWindows()
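A follow-up sketch of mine (not part of the original answer): if the blur still leaves speckles, a morphological opening on the mask removes small isolated blobs, and since MOG2 with detectShadows=True marks shadows as gray (value 127), thresholding the mask also discards shadow pixels:

# assumption: fgmask is the single-channel mask returned by backSub.apply(blur)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN, kernel)       # drop small speckles
_, fgmask = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)  # drop gray shadow pixels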
You can use SuBSENSE: A Universal Change Detection Method With Local Adaptive Sensitivity https://ieeexplore.ieee.org/document/6975239.
BackgroundSubtractionSuBSENSE bgs(/*...*/);
bgs.initialize(/*...*/);
for(/*all frames in the video*/) {
    //...
    bgs(input, output);
    //...
}
You can find the complete implementation at
https://bitbucket.org/pierre_luc_st_charles/subsense/src/master/
Also, I don't know the scale of your work or your requirements, but Murari Mandal has composed a very informative GitHub repository comprising a list of resources related to background subtraction, which may help with the problems mentioned above.
https://github.com/murari023/awesome-background-subtraction

Video capture using OpenCV through a single-port multi-head (stereo) USB camera giving a single output

I recently bought a stereo camera through Amazon and I want to use it for depth mapping. The problem is that the output I get from the camera is a single video containing the output of both cameras.
What I want is two separate outputs from the single USB port, if that is possible. I could crop each frame, but I don't want to do that because I am trying to reduce the processing time and I want the outputs separately.
The above image was generated from the following code:
import numpy as np
import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_PROP_FPS, 120)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    s, original = cam.read()
    cv2.imshow('original', original)
    if cv2.waitKey(1) & 0xFF == ord('w'):
        break

cam.release()
cv2.destroyAllWindows()
I have also tried other techniques such as:
import numpy as np
import cv2

left = cv2.VideoCapture(1)
right = cv2.VideoCapture(2)

left.set(cv2.CAP_PROP_FRAME_WIDTH, 720)
left.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
right.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
right.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
left.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
right.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))

while True:
    # Grab both frames first, then retrieve to minimize latency between cameras
    if not (left.grab() and right.grab()):
        break
    _, leftFrame = left.retrieve()
    leftHeight, leftWidth = leftFrame.shape[:2]
    _, rightFrame = right.retrieve()
    rightHeight, rightWidth = rightFrame.shape[:2]

    # TODO: Calibrate the cameras and correct the images

    cv2.imshow('left', leftFrame)
    cv2.imshow('right', rightFrame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

left.release()
right.release()
cv2.destroyAllWindows()
but they do not recognise the third camera. Any help would be nice.
My OpenCV version is 3.4.
P.S. If anyone can present a solution in C++, it would also work for me.
Ok, so after analysing the problem I figured that the best way would be to crop the image in half, as this saves processing time. If you have two different image sources, your pipeline time for fetching the images is doubled. After testing the stereo camera with and without cropping, I saw no noticeable change in the FPS. Here is a simple piece of code for cropping the video and displaying it in two different windows.
import numpy as np
import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_PROP_FPS, 120)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

s, original = cam.read()
height, width, channels = original.shape
print(width)
print(height)

while True:
    s, original = cam.read()
    left = original[0:height, 0:int(width/2)]
    right = original[0:height, int(width/2):width]
    cv2.imshow('left', left)
    cv2.imshow('Right', right)
    if cv2.waitKey(1) & 0xFF == ord('w'):
        break

cam.release()
cv2.destroyAllWindows()
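One design note worth adding (my observation, not from the original answer): the two slices above are NumPy views into the same frame buffer, so the split itself copies no pixel data, which is consistent with the lack of FPS change observed when cropping.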

How do I take the median of 5 grayscale frames?

I am taking input from a video and I want to take the median of the first 5 frames so that I can use it as the background image for motion detection using frame differencing.
I also want to use a time condition: if motion is not detected, recalculate the background; otherwise, wait t seconds. I am new to OpenCV and I don't know how to do it, please help.
Also, I want to read my video at 1 fps, but this does not work. Here is the code I have:
import cv2
BLUR_SIZE = 3
NOISE_CUTOFF = 12
cam = cv2.VideoCapture('gh10fps.mp4')
cam.set(3, 640)
cam.set(4, 480)
cam.set(cv2.cv.CV_CAP_PROP_FPS, 1)
fps=cam.get(cv2.cv.CV_CAP_PROP_FPS)
print "Current FPS: ",fps
If you really want the median of the first 5 frames, then the following should do what you are looking for:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

frames = []
for _ in range(5):
    ret, frame = cap.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames.append(gray)

median = np.median(frames, axis=0).astype(dtype=np.uint8)

cv2.imshow('frame', median)
cv2.waitKey(0)

cap.release()
cv2.destroyAllWindows()
Note, this is just taking the source from a webcam as an example.
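The question also asked for the time condition (recompute the background when no motion is detected, otherwise wait t seconds). Here is a minimal sketch of mine, not part of the original answer, assuming a still-open capture cap and the median background computed as above; motion_threshold and wait_seconds are illustrative names:

import time

motion_threshold = 25  # assumed per-pixel difference cutoff, tune for your scene
wait_seconds = 5       # the "t seconds" from the question

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, median)
    _, mask = cv2.threshold(diff, motion_threshold, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) == 0:
        median = gray.copy()      # no motion detected: refresh the background
    else:
        time.sleep(wait_seconds)  # motion detected: wait t seconds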

How can I test the actual resolution of my camera when I acquire a frame using OpenCV?

I am working in Python/OpenCV, acquiring frames from a USB webcam (Logitech C615 Camera, supposedly HD 1080p). 1080p has a 16:9 aspect ratio and thus I should be able to acquire images at all of these resolutions:
1920 x 1080
1600 x 900
1366 x 768
1280 x 720
1024 x 576
I didn't write the camera driver however, so how do I know if I am really getting these pixels off of the camera? For example, I can specify 3840 x 2160 and I get a video frame of that size!
Is there a systematic way I can evaluate/determine the real resolution or effective resolution of the camera given these different resolution settings? Below is some Python/OpenCV code to demonstrate.
import numpy as np
import cv2, cv
import time

cap = cv2.VideoCapture(0)  # note you may need to pass 1 instead of 0 into this to get your camera
cap.set(3, 3840)  # horizontal pixels
cap.set(4, 2160)  # vertical pixels
cap.set(5, 15)    # frame rate
time.sleep(2)     # trying to solve a delay issue ... never mind this

# acquire the video from the camera
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow("captured video", frame)
    if cv2.waitKey(33) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
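You can read back the width and height the capture is actually using from the VideoCapture object: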
import cv2

cam = cv2.VideoCapture(0)
w = cam.get(cv2.CAP_PROP_FRAME_WIDTH)
h = cam.get(cv2.CAP_PROP_FRAME_HEIGHT)
print(w, h)

while cam.isOpened():
    err, img = cam.read()
    cv2.imshow("lalala", img)
    k = cv2.waitKey(10) & 0xff
    if k == 27:
        break

# release the camera and close the window when done
cam.release()
cv2.destroyAllWindows()
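A hedged addition of mine: the shape of the frame you actually receive is the ground truth for what the pipeline delivers, so comparing it against the request catches silent fallbacks. Note that it still cannot prove the sensor really captured that many pixels, since a driver can upscale internally; for that, only an optical test is conclusive, e.g. imaging a resolution test chart and checking where fine line pairs blur together.

import cv2

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 3840)   # request the 4K size from the question
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 2160)
ret, frame = cap.read()
if ret:
    h, w = frame.shape[:2]
    print("requested 3840x2160, delivered %dx%d" % (w, h))
cap.release()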
