I'm trying to capture from multiple cameras with OpenCV in Python, but it is not working. When I print the return values, one camera gives True and the other gives False. Each camera works fine when I use it on its own. I have tried to solve this but haven't found a solution. I have attached the output to this post.
[Note] Python version 3.7.6, OpenCV version 4.4.0, USB cameras.
import numpy as np
import cv2
video_capture_0 = cv2.VideoCapture(0, cv2.CAP_DSHOW)
video_capture_1 = cv2.VideoCapture(1, cv2.CAP_DSHOW)
while True:
    # Capture frame-by-frame
    ret0, frame0 = video_capture_0.read()
    ret1, frame1 = video_capture_1.read()
    print(ret0)
    print(ret1)

    if ret0:
        # Display the resulting frame
        cv2.imshow('Cam 0', frame0)

    if ret1:
        # Display the resulting frame
        cv2.imshow('Cam 1', frame1)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything is done, release the capture
video_capture_0.release()
video_capture_1.release()
cv2.destroyAllWindows()
[Output image: ret0 prints True, ret1 prints False]
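One quick check worth running before the loop is whether both devices actually open. A minimal diagnostic sketch, assuming the same camera indices and CAP_DSHOW backend as above:

import cv2

# Confirm that both devices opened before entering any capture loop.
video_capture_0 = cv2.VideoCapture(0, cv2.CAP_DSHOW)
video_capture_1 = cv2.VideoCapture(1, cv2.CAP_DSHOW)

print("Cam 0 opened:", video_capture_0.isOpened())
print("Cam 1 opened:", video_capture_1.isOpened())

video_capture_0.release()
video_capture_1.release()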
Related
I am new to OpenCV. I want to display a video/webcam feed in OpenCV. I have written the following code:
import cv2
cap = cv2.VideoCapture(0)
while True:
    ret, img = cap.read()
    cv2.imshow("Frame", img)
Instead of the webcam feed or video, I get a black screen with no output, as shown in the picture.
You need to add cv2.waitKey(1), or whatever millisecond value you prefer. This displays each frame for at least 1 ms and lets the window refresh. Check the example here:
if cv2.waitKey(1) & 0xFF == ord('q'):
    break
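Applied to the code above, the whole loop would look roughly like this (a minimal sketch; pressing 'q' exits):

import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, img = cap.read()
    if not ret:
        break
    cv2.imshow("Frame", img)
    # Wait 1 ms for a key event; without this call the window never refreshes.
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()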
I have working code for applying background subtraction to a saved video, but it won't properly write the background-subtracted frames to the output file. I get the .avi file with the filename I specified in cv2.VideoWriter, but it doesn't seem to contain any of the frames I pass:
import cv2
import numpy as np
cap = cv2.VideoCapture('traffic-mini.mp4')
fgbg = cv2.createBackgroundSubtractorMOG2()
cv2.startWindowThread()
fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
out = cv2.VideoWriter('test_output.avi',fourcc, 20.0, (640,480))
while True:
    ret, frame = cap.read()
    if ret == True:
        frame = fgbg.apply(frame)
        out.write(frame)
        cv2.imshow('fg', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
cap.release()
out.release()
cv2.destroyAllWindows()
for i in range(1, 5):
    cv2.waitKey(1)
The output video test_output.avi is always 6 KB and contains no frames. What am I missing? Thanks in advance.
Try this:
# Add a 0 to the end of the VideoWriter call, after (640, 480)
out = cv2.VideoWriter('test_output.avi', fourcc, 20.0, (640, 480), 0)

while True:
    ret, frame = cap.read()
    if ret == True:
        frame = cv2.resize(frame, (640, 480))
        frame = fgbg.apply(frame)
        out.write(frame)
        cv2.imshow('fg', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
The reason is that to write out black-and-white (single-channel) frames, you need the 0 at the end to tell OpenCV that the frames are grayscale rather than color.
You may have to swap the two numbers in the resize call, as I can't remember offhand which is the width and which is the height, but the point is that the frame size must match between the frames you write and the size you give the writer. Also, a hint for background subtraction: convert the video to grayscale first, like
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
It's because the size of the frames is not (640, 480). Instead of
out = cv2.VideoWriter('test_output.avi',fourcc, 20.0, (640,480))
try
out = cv2.VideoWriter('test_output.avi',fourcc, 20.0, (int(cap.get(3)), int(cap.get(4))))
MNM's proposed solution (adding 0 as the last parameter of VideoWriter) works well on my end, using OpenCV 3.4.5 on Raspbian Stretch (Raspberry Pi 3).
The official documentation (https://docs.opencv.org/3.4.5/dd/d9e/classcv_1_1VideoWriter.html) states: "isColor: If it is not zero, the encoder will expect and encode color frames, otherwise it will work with grayscale frames (the flag is currently supported on Windows only)." In practice it appears to apply to other OSes as well.
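Putting the two suggestions together, here is a minimal sketch (assuming the same input file and MJPG codec as above) that sizes the writer from the capture itself and marks the output as grayscale:

import cv2

cap = cv2.VideoCapture('traffic-mini.mp4')
fgbg = cv2.createBackgroundSubtractorMOG2()

# Use the source's own frame size so the writer and the frames always match.
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc('M', 'J', 'P', 'G')
# isColor=False because the MOG2 mask is a single-channel image.
out = cv2.VideoWriter('test_output.avi', fourcc, 20.0, (width, height), False)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    mask = fgbg.apply(frame)
    out.write(mask)

cap.release()
out.release()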
I have recently bought a stereo camera through Amazon and I want to use it for depth mapping. The problem is that the output I get from the camera is a single video containing the output of both cameras side by side.
What I want is two separate outputs from the single USB port, if that is possible. I could use cropping, but I don't want to, because I am trying to reduce the processing time and I want the outputs separately.
The above image was generated by the following code:
import numpy as np
import cv2
cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_PROP_FPS, 120)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
while True:
    s, orignal = cam.read()
    cv2.imshow('original', orignal)
    if cv2.waitKey(1) & 0xFF == ord('w'):
        break
cam.release()
cv2.destroyAllWindows()
I have also tried other techniques such as:
import numpy as np
import cv2
left = cv2.VideoCapture(1)
right = cv2.VideoCapture(2)
left.set(cv2.CAP_PROP_FRAME_WIDTH, 720)
left.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
right.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
right.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
left.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
right.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
# Grab both frames first, then retrieve, to minimize latency between cameras
while True:
    left.grab()
    right.grab()
    _, leftFrame = left.retrieve()
    leftHeight, leftWidth = leftFrame.shape[:2]
    _, rightFrame = right.retrieve()
    rightHeight, rightWidth = rightFrame.shape[:2]

    # TODO: Calibrate the cameras and correct the images
    cv2.imshow('left', leftFrame)
    cv2.imshow('right', rightFrame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
left.release()
right.release()
cv2.destroyAllWindows()
but it is not recognising the third camera (index 2). Any help would be nice.
My OpenCV version is 3.4.
P.S. If anyone can present a solution in C++, it would also work for me.
OK, so after analysing the problem, I figured that the best way is to crop the image in half, as it saves processing time: if you have two different image sources, the pipeline time for grabbing frames is doubled. After testing the stereo camera with and without cropping, I saw no noticeable change in FPS. Here is simple code for cropping the video and displaying the halves in two different windows.
import numpy as np
import cv2
cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_PROP_FPS, 120)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
s,orignal = cam.read()
height, width, channels = orignal.shape
print(width)
print(height)
while True:
    s, orignal = cam.read()
    left = orignal[0:height, 0:int(width/2)]
    right = orignal[0:height, int(width/2):width]
    cv2.imshow('Left', left)
    cv2.imshow('Right', right)
    if cv2.waitKey(1) & 0xFF == ord('w'):
        break
cam.release()
cv2.destroyAllWindows()
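If the split is needed in more than one place, it can be pulled into a small helper; a minimal sketch (the name split_stereo_frame is just for illustration):

def split_stereo_frame(frame):
    # Hypothetical helper: split a side-by-side stereo frame into left and right halves.
    height, width = frame.shape[:2]
    half = width // 2
    return frame[:, :half], frame[:, half:]

# Usage inside the capture loop:
# left, right = split_stereo_frame(orignal)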
I'm trying to run a real-time capture of 2 USB cameras (same model):
USB-Camera_0 --> USB-port 0
USB-Camera_1 --> USB-port 1
I use this code to make a real-time capture from only one camera. To run the second camera (connected to the second USB port) at the same time, I create another file with the same code, changing the index from 0 to 1. I'm asking whether it is possible to run these two real-time captures in the same code file.
Thanks.
import numpy as np
import cv2
cap = cv2.VideoCapture(0)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Display the resulting frame
    cv2.imshow('frame', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
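Both captures can be created in the same script; here is a minimal sketch, assuming the cameras enumerate as indices 0 and 1 (the same pattern as the first snippet on this page):

import cv2

cap0 = cv2.VideoCapture(0)
cap1 = cv2.VideoCapture(1)

while True:
    ret0, frame0 = cap0.read()
    ret1, frame1 = cap1.read()

    # Only show a window when its camera returned a valid frame.
    if ret0:
        cv2.imshow('frame 0', frame0)
    if ret1:
        cv2.imshow('frame 1', frame1)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap0.release()
cap1.release()
cv2.destroyAllWindows()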
I'm trying to use MOG background subtraction, but the "history" parameter doesn't seem to have any effect.
OpenCV 2.4.13
Python (2.7.6)
Observation: The program appears to use the very first frame it captures for all future background subtractions.
Expectation: The background should slowly evolve based on the "history" parameter, so that if the camera angle changes, or if a person/object leaves the field-of-view, the "background" image will change accordingly.
import cv2
import numpy as np
cap = cv2.VideoCapture(0)
mog = cv2.BackgroundSubtractorMOG(history=10, nmixtures=5, backgroundRatio=0.25)
while True:
    ret, img1 = cap.read()
    if ret is True:
        cv2.imshow("original", img1)
        img_mog = mog.apply(img1)
        cv2.imshow("mog", img_mog)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Thank you in advance for your help!
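For reference, the speed at which the model adapts can also be driven explicitly through the optional learningRate argument of apply(); a minimal sketch against the same 2.4 API as in the question (treat the keyword argument as an assumption to verify against your build):

import cv2

cap = cv2.VideoCapture(0)
mog = cv2.BackgroundSubtractorMOG(history=10, nmixtures=5, backgroundRatio=0.25)

while True:
    ret, img1 = cap.read()
    if not ret:
        break
    # A higher learning rate makes the background model forget old frames faster;
    # 0 freezes the model, 1 rebuilds it from the latest frame only.
    fgmask = mog.apply(img1, learningRate=0.05)
    cv2.imshow("mog", fgmask)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()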