How do I set the window size of VideoCapture? - python

In the following snippet of code, how would I set the size of the capture window? I want to take a 256x256 pixel picture.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    ret, frame = cap.read()
    cv2.imshow('img1', frame)

    if cv2.waitKey(1) & 0xFF == ord('y'):
        cv2.imwrite('imag.png', frame)
        cv2.destroyAllWindows()
        break

cap.release()

You can alternatively use OpenCV's resize function, or slice your frame if you are only interested in a certain region of the captured image.
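For example, a minimal sketch of both approaches (the output file names here are just placeholders, and the slice assumes the captured frame is at least 256 pixels in each dimension):

import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    # Option 1: scale the whole frame down to 256x256 (dsize is width, height)
    small = cv2.resize(frame, (256, 256))
    # Option 2: slice a 256x256 region out of the frame (here the top-left corner)
    crop = frame[0:256, 0:256]
    cv2.imwrite('resized.png', small)
    cv2.imwrite('cropped.png', crop)
cap.release()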

You can set the capture frame size like so:
video_capture = cv2.VideoCapture(0)
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 256)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 256)
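Note that many webcams only support a fixed set of resolutions, so the driver may ignore the request or round it to the nearest supported size. As a small sanity check (a sketch using the same properties as above), you can read the values back:

import cv2

video_capture = cv2.VideoCapture(0)
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 256)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 256)

# Read back the size the camera actually agreed to deliver
actual_width = video_capture.get(cv2.CAP_PROP_FRAME_WIDTH)
actual_height = video_capture.get(cv2.CAP_PROP_FRAME_HEIGHT)
print(actual_width, actual_height)  # may differ from 256x256 on cameras with fixed modes
video_capture.release()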

Related

How to destroy video feed window and display another image whenever multiple faces are detected using mediapipe

I am using OpenCV and Mediapipe to create an application where multiple faces in the live feed are not desired. Hence I require a solution to either destroy the video feed and display an image until there is only 1 face in the frame (and then display the feed again, of course),
or overlay an image on the entire video feed display window (hence hiding the feed display).
Here's the code so far:
import cv2
import mediapipe as mp
import time

cap = cv2.VideoCapture(0)
face_detection = mp.solutions.face_detection.FaceDetection(0.8)
mul = cv2.imread('images.png')
mul = cv2.resize(mul, (640, 480))

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (640, 480))
    results = face_detection.process(imgRGB)
    for count, detection in enumerate(results.detections):
        continue
    count += 1
    if count > 1:
        cv2.destroyAllWindows()
        cv2.imshow("output", mul)
        time.sleep(10)
        continue
    cv2.imshow("output", frame)

cap.release()
cv2.destroyAllWindows()
I'm trying to destroy the feed and display the image instead. The delay introduced with time.sleep(10) is there because without it the windows switch between the video feed and the image at a very high rate, making it hard to see what's happening.
The problem is that the image is not being displayed; the window appears blank grey, and after 10 seconds the video feed comes back up and takes a very long time to display the image again, even though the multiple faces never leave the frame.
Thank you
You observe grey frames because you are destroying the window every time the loop starts over. You also get stuck at the if count > 1: branch, since count is incremented for every frame regardless of any condition and is never reset, so count stays above 1 once two faces have been detected, even if those faces appeared in different frames. Here is my solution to the problem; hope it helps.
import cv2
import mediapipe as mp
import time

cap = cv2.VideoCapture(0)
face_detection = mp.solutions.face_detection.FaceDetection(0.8)
mul = cv2.imread('image.jpg')
mul = cv2.resize(mul, (640, 480))
count = 0

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (640, 480))
    cv2.imshow("Original", frame)

    imgRGB = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = face_detection.process(imgRGB)
    if results.detections:
        for count, detection in enumerate(results.detections):
            count += 1
        print("Number of Faces: ", count)

    if count > 1:
        cv2.imshow('StackOverflow', mul)
    else:
        cv2.imshow('StackOverflow', frame)

    count = 0

    if cv2.waitKey(5) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
Here is the result with one face in the frame:
And the result with multiple faces:

OpenCV Video not shown full size

My problem is that OpenCV runs and shows my video at less than its original size (OpenCV seems to shrink the video a little), which affects my region of interest.
How can I make OpenCV show the exact original size of my video? Below is my code and the difference between the video's original size and what OpenCV displays.
Code:
import cv2

cap = cv2.VideoCapture("video_link")

while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("frame", frame)
    key = cv2.waitKey(1)
    if key == 27:
        break

cap.release()
cv2.destroyWindow("frame")
Original size:
OpenCV Video output size:
Try this, good luck. Manually enter the height and width of the camera feed. (Edit: CAP_DSHOW works only on Windows; if you are on something else, use a different solution.)
import cv2

video = "videolink"
height = 1080  # for some reason, ANYTHING else works for my HD camera, for example 1079..
width = 1920

cap = cv2.VideoCapture(video, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)

while cap.isOpened():
    success, image = cap.read()
    if not success:
        print("Ignoring empty camera frame.")
        # If loading a video, use 'break' instead of 'continue'.
        continue
    cv2.imshow('CameraFeed', image)
    if cv2.waitKey(5) & 0xFF == ord("q"):
        break

cap.release()

How do I capture a single image from webcam and process it further in OpenCV?

I want my code to capture a single image from the webcam and process it further, e.g. later detect colours, Sobel edges and a lot more.
In short, I want to do image acquisition.
To use your webcam, you can use VideoCapture:
import cv2
cap = cv2.VideoCapture(0) # use 0 if you only have front facing camera
ret, frame = cap.read() #read one frame
print(frame.shape)
cap.release() # release the VideoCapture object.
You start the webcam, read one image and immediately release it. The frame is the image and you can preprocess it however you want. You can view the image using imshow:
cv2.imshow('image', frame)
if cv2.waitKey(0) & 0xff == ord('q'):  # press q to exit
    cv2.destroyAllWindows()
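Since the question also mentions Sobel edges, here is a minimal sketch of processing the captured frame further (assuming the frame was read successfully as above; the 3x3 kernel size is just an example):

import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
cap.release()

if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Horizontal and vertical gradients with a 3x3 Sobel kernel
    sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    # Gradient magnitude, converted back to 8-bit for display
    edges = cv2.convertScaleAbs(cv2.magnitude(sobel_x, sobel_y))
    cv2.imshow('edges', edges)
    cv2.waitKey(0)
    cv2.destroyAllWindows()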
import cv2

cap = cv2.VideoCapture(0)  # usually 0 if you have only 1 camera; with multiple cameras it may be 0, 1, 2, ...
ret, frame = cap.read()  # ret is a True/False status showing whether reading a frame from the webcam succeeded; frame is an array

# If you want to loop to read continuously
ret = True
while ret:
    ret, frame = cap.read()
    if frame is None:
        continue  # the loop will stop when reading a frame fails, because ret will then be False
If this is the answer you wanted, it has been asked multiple times. Make sure you search for an existing answer before asking.
import cv2

cam = cv2.VideoCapture(0)
image = cam.read()[1]
cv2.imshow("image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()

Python - OpenCV - Set window resolution when opening a video file

I want to display 3 videos at the same time, like this:
The way I am opening 1 video right now in fullscreen is as follows:
import numpy as np
import cv2

cap = cv2.VideoCapture(r'C:\Users\NachoM\Videos\VTS_01_1.mp4')  # raw string so the backslashes are not treated as escapes

while(cap.isOpened()):
    ret, frame = cap.read()

    cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty("window", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('window', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Is there any other parameter I have to pass to cv2 to position the windows like in the picture above?
Thank you
One way of doing this is using:
cv2.moveWindow("WindowName", x, y)
More details can be found (assuming OpenCV 2.4 is used) at: http://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html#movewindow
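As a rough sketch of the layout in the question (the three file names, window sizes and screen positions below are arbitrary placeholders), you can create one resizable window per video and place each with cv2.moveWindow:

import cv2

paths = ['video1.mp4', 'video2.mp4', 'video3.mp4']  # placeholder file names
positions = [(0, 0), (640, 0), (0, 520)]            # arbitrary x, y screen offsets
names = ['window1', 'window2', 'window3']

caps = [cv2.VideoCapture(p) for p in paths]
for name, (x, y) in zip(names, positions):
    cv2.namedWindow(name, cv2.WINDOW_NORMAL)  # WINDOW_NORMAL makes the window resizable
    cv2.resizeWindow(name, 640, 480)
    cv2.moveWindow(name, x, y)

while True:
    frames = [cap.read() for cap in caps]
    if not all(ret for ret, _ in frames):
        break
    for name, (_, frame) in zip(names, frames):
        cv2.imshow(name, frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

for cap in caps:
    cap.release()
cv2.destroyAllWindows()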

cv2.BackgroundSubtractorMOG2 outputs videos with initial frame superimposed

How can I output the video with the initial frame superimposed using cv2.BackgroundSubtractorMOG2?
Here is my code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
fgbg = cv2.BackgroundSubtractorMOG2()
fourcc = cv2.cv.CV_FOURCC(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480), isColor = False)

while(cap.isOpened()):
    ret, frame = cap.read()
    if ret == True:
        fgmask = fgbg.apply(frame)

        cv2.imshow('original', frame)
        cv2.imshow('fg', fgmask)
        out.write(fgmask)

        k = cv2.waitKey(30) & 0xFF
        if k == 27:
            break
    else:
        break

cap.release()
out.release()
cv2.destroyAllWindows()
This is what I get:
You can use the cv2.addWeighted function to achieve the superimpose effect.
However, note that the MOG background subtractor continuously updates its "idea" of the background as new frames are added via the apply function (which is why it eventually removes objects that were originally in the foreground mask but have been stationary for too long), hence a superimposed "initial frame" may be misleading.
You might be better off superimposing the immediately previous frame (or frames) instead of the initial frame if you want to see the effect of the foreground mask.
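A minimal sketch of that suggestion, assuming an OpenCV 3+ build (so the subtractor is created with cv2.createBackgroundSubtractorMOG2 rather than the 2.4-style call in the question) and arbitrary 0.3/0.7 blend weights:

import cv2

cap = cv2.VideoCapture(0)
fgbg = cv2.createBackgroundSubtractorMOG2()  # OpenCV 3+ name; cv2.BackgroundSubtractorMOG2() in 2.4
prev_frame = None

while True:
    ret, frame = cap.read()
    if not ret:
        break
    fgmask = fgbg.apply(frame)
    if prev_frame is not None:
        # Give the mask 3 channels so it matches the frame, then blend
        mask_bgr = cv2.cvtColor(fgmask, cv2.COLOR_GRAY2BGR)
        blended = cv2.addWeighted(prev_frame, 0.3, mask_bgr, 0.7, 0)
        cv2.imshow('blended', blended)
    prev_frame = frame
    if cv2.waitKey(30) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()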
