OpenCV Video not shown full size - python

My problem is that OpenCV does not show my video at its original size (it seems to shrink the video slightly), which affects my region of interest.
How can I get OpenCV to show the exact original size of my video? Below is my code, along with the difference between the video's original size and what OpenCV displays.
Code:
import cv2

cap = cv2.VideoCapture("video_link")
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("frame", frame)
    key = cv2.waitKey(1)
    if key == 27:  # Esc to quit
        break
cap.release()
cv2.destroyWindow("frame")
Original size:
OpenCV Video output size:

Try this, good luck. Manually enter the height and width of the camera feed. (Edit: CAP_DSHOW works only on Windows; if you are on something else, use a different backend.)
import cv2

video = "videolink"
height = 1080  # for some reason only the exact native height works for my HD camera; e.g. 1079 does not
width = 1920

cap = cv2.VideoCapture(video, cv2.CAP_DSHOW)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)

while cap.isOpened():
    success, image = cap.read()
    if not success:
        print("Ignoring empty camera frame.")
        # If loading a video, use 'break' instead of 'continue'.
        continue
    cv2.imshow('CameraFeed', image)
    if cv2.waitKey(5) & 0xFF == ord("q"):
        break
cap.release()
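If the frames themselves are already decoded at full resolution, the shrinking may be happening only in the display window. A minimal sketch to check this, assuming the default backend and an illustrative window name of "frame":

import cv2

cap = cv2.VideoCapture("video_link")  # illustrative path
print("reported size:",
      cap.get(cv2.CAP_PROP_FRAME_WIDTH),
      cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

ret, frame = cap.read()
if ret:
    # frame.shape is (height, width, channels) of the decoded frame
    print("decoded size:", frame.shape[1], frame.shape[0])
    # WINDOW_AUTOSIZE keeps the window at the exact image size (no scaling)
    cv2.namedWindow("frame", cv2.WINDOW_AUTOSIZE)
    cv2.imshow("frame", frame)
    cv2.waitKey(0)
cap.release()
cv2.destroyAllWindows()

If the decoded size already matches the file, the scaling is happening in the window layer rather than in the capture itself.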

Related

How to destroy video feed window and display another image whenever multiple faces are detected using mediapipe

I am using OpenCV and MediaPipe to create an application where multiple faces in the live feed are not desired. Hence I need a solution to either destroy the video feed window and display an image until there is only 1 face in the frame (and then display the feed again, of course),
or overlay an image on the entire video feed window (hence hiding the feed).
Here's the code so far:
import cv2
import mediapipe as mp
import time

cap = cv2.VideoCapture(0)
face_detection = mp.solutions.face_detection.FaceDetection(0.8)
mul = cv2.imread('images.png')
mul = cv2.resize(mul, (640, 480))

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (640, 480))
    imgRGB = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB
    results = face_detection.process(imgRGB)
    for count, detection in enumerate(results.detections):
        continue
    count += 1
    if count > 1:
        cv2.destroyAllWindows()
        cv2.imshow("output", mul)
        time.sleep(10)
        continue
    cv2.imshow("output", frame)
cap.release()
cv2.destroyAllWindows()
I'm trying to destroy the feed window and display the image instead. The time.sleep(10) delay is there because without it the windows switch between the video feed and the image at a very high rate, making it hard to see what's happening.
The problem is that the image is not being displayed; the window appears blank grey. After 10 seconds the video feed comes back up, and it takes very long to display the image again even though the multiple faces never leave the frame.
Thank you
You observe grey frames because you are destroying the window every time the loop starts over. You also get stuck at the if count > 1: statement, since count is increased for every frame regardless of any condition and is never reset, so count stays above 1 after two faces have been detected even though those faces were in different frames. Here is my solution to the problem; hope it helps.
import cv2
import mediapipe as mp
import time

cap = cv2.VideoCapture(0)
face_detection = mp.solutions.face_detection.FaceDetection(0.8)
mul = cv2.imread('image.jpg')
mul = cv2.resize(mul, (640, 480))
count = 0

while True:
    ret, frame = cap.read()
    frame = cv2.resize(frame, (640, 480))
    cv2.imshow("Original", frame)
    imgRGB = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = face_detection.process(imgRGB)
    if results.detections:
        for count, detection in enumerate(results.detections):
            count += 1
        print("Number of Faces: ", count)
        if count > 1:
            cv2.imshow('StackOverflow', mul)
        else:
            cv2.imshow('StackOverflow', frame)
    count = 0
    if cv2.waitKey(5) & 0xFF == 27:
        break
cap.release()
cv2.destroyAllWindows()
Here is the result with one face in the frame:
Result with multiple faces:

How do I capture a single image from webcam and process it further in OpenCV?

I want my code to capture a single image from the webcam and process it further, e.g. later detecting colours, Sobel edges and a lot more.
In short, I want to do image acquisition.
To use your webcam, you can use VideoCapture:
import cv2

cap = cv2.VideoCapture(0)  # use 0 if you only have a front-facing camera
ret, frame = cap.read()    # read one frame
print(frame.shape)
cap.release()              # release the VideoCapture object
You start the webcam, read one image and immediately release it. The frame is the image and you can preprocess it however you want. You can view the image using imshow:
cv2.imshow('image', frame)
if cv2.waitKey(0) & 0xff == ord('q'):  # press q to exit
    cv2.destroyAllWindows()
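Since the question mentions Sobel edges as a later processing step, here is a minimal sketch of processing the single captured frame; the kernel size and output file name are illustrative:

import cv2

cap = cv2.VideoCapture(0)
ret, frame = cap.read()  # grab a single frame
cap.release()

if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Sobel gradients in x and y, combined into an edge magnitude image
    sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    edges = cv2.convertScaleAbs(cv2.magnitude(sobel_x, sobel_y))
    cv2.imwrite('edges.png', edges)  # illustrative output path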
import cv2

cap = cv2.VideoCapture(0)  # usually 0 if you have only one camera; with multiple cameras it may be 0, 1, 2, ...
ret, frame = cap.read()    # ret is True/False depending on whether the frame was read successfully; frame is an array

# If you want to loop and read continuously:
ret = True
while ret:
    ret, frame = cap.read()
    if frame is None:
        continue  # the loop stops once reading fails, because ret will then be False
If this is the answer you wanted, then it has been asked multiple times before. Make sure you have tried searching for the answer before asking.
cam = cv2.VideoCapture(0)
image = cam.read()[1]
cv2.imshow("image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()

Python - OpenCV - Set window resolution when opening a video file

I want to display 3 videos at the same time, like this:
The way I am opening 1 video right now in fullscreen is as follows:
import numpy as np
import cv2

cap = cv2.VideoCapture(r'C:\Users\NachoM\Videos\VTS_01_1.mp4')  # raw string so the backslashes are not treated as escapes
while cap.isOpened():
    ret, frame = cap.read()
    cv2.namedWindow("window", cv2.WND_PROP_FULLSCREEN)
    cv2.setWindowProperty("window", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('window', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
Is there any other parameter I have to pass to cv2 to position the windows like in the picture above?
Thank you
One way of doing this is using:
cv2.moveWindow("WindowName", x, y)
More details can be found at (assuming OpenCV 2.4 is used): http://docs.opencv.org/2.4/modules/highgui/doc/user_interface.html#movewindow
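As a rough sketch of how this could position three video windows side by side (the file paths, window names, sizes, and offsets are all illustrative):

import cv2

paths = ['video1.mp4', 'video2.mp4', 'video3.mp4']  # illustrative paths
caps = [cv2.VideoCapture(p) for p in paths]

# Create one resizable window per video and place them next to each other
for i in range(len(caps)):
    name = 'video%d' % i
    cv2.namedWindow(name, cv2.WINDOW_NORMAL)
    cv2.resizeWindow(name, 640, 360)
    cv2.moveWindow(name, i * 650, 100)  # 650 px apart, 100 px from the top

while True:
    frames = [cap.read() for cap in caps]
    if not all(ret for ret, _ in frames):
        break  # stop when any video runs out of frames
    for i, (ret, frame) in enumerate(frames):
        cv2.imshow('video%d' % i, frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

for cap in caps:
    cap.release()
cv2.destroyAllWindows()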

How to save masks of videos in openCV2 python

I can capture a video from the webcam and save it fine with this code
import cv2

cap = cv2.VideoCapture(0)
fgbg = cv2.BackgroundSubtractorMOG()
fourcc = cv2.cv.CV_FOURCC(*'DIVX')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))
while cap.isOpened():
    ret, frame = cap.read()
    if ret:
        fgmask = fgbg.apply(frame)
        out.write(frame)  # save it
        cv2.imshow('Background Subtraction', fgmask)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break  # q to quit
    else:
        break  # EOF
cap.release()
out.release()
cv2.destroyAllWindows()
This records as one would expect and also shows the background subtraction output. It saves to output.avi. All is well. But when I try to save the foreground mask instead, it gives me a "Could not demultiplex stream" error. (This is the line I change in the code above.)
out.write(fgmask)  # save it
Why is this? Is fgmask not a frame like the ones I read from the capture?
Alright, figured it out! Let me know if there's a more efficient way to do this or if I am missing something.
The foreground mask generated by background subtraction is an 8-bit single-channel (binary) image, so we have to convert it to a format the VideoWriter expects. Probably a better option exists, but I used RGB:
frame = cv2.cvtColor(fgmask, cv2.COLOR_GRAY2RGB)
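In the loop from the question, that conversion would sit just before the write call; a minimal sketch using the variable names from the question's code:

fgmask = fgbg.apply(frame)
# The VideoWriter was opened for 3-channel colour frames, so expand the
# single-channel mask to 3 channels before writing it
mask_rgb = cv2.cvtColor(fgmask, cv2.COLOR_GRAY2RGB)
out.write(mask_rgb)
cv2.imshow('Background Subtraction', fgmask)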

OpenCV camera just showing black output

I am following this tutorial http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#display-video and trying to capture video from the camera. Although the code runs without errors, it doesn't show anything except a black screen in the frame window. I don't know why this is happening.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if cap.isOpened() == 0:
        cap.open(0)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
I have read in some posts that OpenCV might not support my camera, so I don't know what exactly is happening.
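One way to narrow this down is to check whether frames are actually being read at all; a small diagnostic sketch (the camera indices tried are illustrative):

import cv2

for index in (0, 1, 2):  # try a few device indices
    cap = cv2.VideoCapture(index)
    opened = cap.isOpened()
    ret, frame = cap.read()
    print("index", index, "opened:", opened, "frame read:", ret)
    if ret:
        # If ret is True but the mean is near zero, the camera is found
        # but is delivering black frames (driver/permission/exposure issue).
        print("mean pixel value:", frame.mean())
    cap.release()

If ret is False, OpenCV is not getting frames from the camera at all; if ret is True but the mean pixel value is near zero, the camera is reachable but delivering black frames.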
