I'm trying to run a real-time capture of two USB cameras (same model):
USB-Camera_0 --> USB port 0
USB-Camera_1 --> USB port 1
I use the code below for a real-time capture from a single camera. To run the second camera (connected to the second USB port) at the same time, I currently create another file with the same code and change the index from 0 to 1. Is it possible to run both real-time captures from the same code file?
Thanks
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
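Yes, both captures can live in one script: open two VideoCapture objects and read them in the same loop. A minimal sketch, assuming the second camera really does enumerate as index 1 on your machine (the window names are mine):

import cv2

cap0 = cv2.VideoCapture(0)  # first USB camera
cap1 = cv2.VideoCapture(1)  # second USB camera (index 1 is an assumption)

while True:
    ret0, frame0 = cap0.read()
    ret1, frame1 = cap1.read()

    # Only show a window when its camera actually delivered a frame
    if ret0:
        cv2.imshow('cam0', frame0)
    if ret1:
        cv2.imshow('cam1', frame1)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap0.release()
cap1.release()
cv2.destroyAllWindows()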
I'm trying to add multiple cameras using OpenCV Python, but in my case it is not working. I printed the return values: one camera gives True and the other gives False. When I use each camera on its own, it works fine. I have tried to solve this but haven't found a solution. I have attached the output to this post.
[Note] Python version 3.7.6, OpenCV version 4.4.0, USB cameras
import numpy as np
import cv2

video_capture_0 = cv2.VideoCapture(0, cv2.CAP_DSHOW)
video_capture_1 = cv2.VideoCapture(1, cv2.CAP_DSHOW)

while True:
    # Capture frame-by-frame
    ret0, frame0 = video_capture_0.read()
    ret1, frame1 = video_capture_1.read()
    print(ret0)
    print(ret1)

    if ret0:
        # Display the resulting frame
        cv2.imshow('Cam 0', frame0)

    if ret1:
        # Display the resulting frame
        cv2.imshow('Cam 1', frame1)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture_0.release()
video_capture_1.release()
cv2.destroyAllWindows()
[Output image attached to the post]
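When one read() returns False like this, it often means that the index does not map to the camera you expect, or that the device failed to open at all. A small probe like the sketch below (the helper name, the index range, and sticking with the DirectShow backend are my assumptions) can confirm which indices actually deliver frames before you start the main loop:

import cv2

def list_working_cameras(max_index=4):
    """Try the first few indices and return those that open and deliver a frame."""
    working = []
    for i in range(max_index):
        cap = cv2.VideoCapture(i, cv2.CAP_DSHOW)
        if cap.isOpened():
            ret, _ = cap.read()
            if ret:
                working.append(i)
        cap.release()
    return working

print(list_working_cameras())  # e.g. [0, 1] if both cameras are usable

If both indices open on their own but not together, two cameras sharing one USB controller can also run out of bandwidth; plugging them into separate controllers or lowering the resolution is worth trying.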
I am new to OpenCV. I want to display a video/webcam feed in OpenCV. I have written the following code:
import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, img = cap.read()
    cv2.imshow("Frame", img)
Instead of getting the webcam feed or video, I get a black screen with no output, as shown in the picture.
You need to add cv2.waitKey(1), or some other number of milliseconds if you wish. This waits at least 1 ms for a key event and gives the window time to actually render the frame. For example:
if cv2.waitKey(1) & 0xFF == ord('q'):
    break
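For completeness, here is the asker's loop with the wait added, plus the usual release calls at the end (the ret check is my addition):

import cv2

cap = cv2.VideoCapture(0)

while True:
    ret, img = cap.read()
    if not ret:
        break
    cv2.imshow("Frame", img)
    # Wait 1 ms for a key press; this also lets the window redraw
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()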
I am using OpenCV 4 along with Python 3 to open a webcam, grab frames and display them in a window, just like the first code tutorial provided here. However, grabbing different frames takes different amounts of time: sometimes 0.01 s, sometimes 0.33 s, which creates lag when showing the frames in the window.
Is there a way to force a constant frame-grab time so that I can see the video without lag? I think the problem is with OpenCV, because the default Windows camera viewer displays the video normally.
I have already tried waiting with time.sleep() before grabbing the next frame, but it does not help.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Convert to grayscale (as in the tutorial) before displaying
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
One potential way is to keep a timestamp of when the last frame was shown and only show a new frame once a certain amount of time has elapsed. At the same time, you keep reading frames continuously so the buffer stays empty and you always have the most recent frame. You don't want to use time.sleep() because it freezes the program and does not keep the buffer empty. Once the timeout is reached, you show the frame and reset the timestamp.
import cv2
import time

cap = cv2.VideoCapture(0)

# Timeout between displayed frames, in seconds
# FPS = 1/TIMEOUT, so 1/.025 = 40 FPS
TIMEOUT = .025
old_timestamp = time.time()

while True:
    # Capture frame-by-frame (keep reading so the buffer stays empty)
    ret, frame = cap.read()

    if (time.time() - old_timestamp) > TIMEOUT:
        # Display the resulting frame
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
        old_timestamp = time.time()

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
When working correctly, this app will be suspended at the read() call until the next frame from the streaming webcam is available. Smooth display depends on being able to execute whatever you may have added to the loop in less than 1/FPS seconds. It also depends on the camera being UVC compliant, and it may depend on the encoding being MJPEG, which is the case for most webcams. However, the fact that you see delays of up to 1/3 second is curious, because that is a typical GOP period for MPEG or other inter-frame encoders.
If none of the above applies to your case, then I suspect the problem is platform related rather than an OpenCV issue. Have you tried to duplicate the problem on another system?
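If the encoding is the suspect, you can ask the driver for MJPEG and a fixed frame rate explicitly. Whether the camera honours these requests is driver dependent, so treat this as a sketch rather than a guaranteed fix:

import cv2

cap = cv2.VideoCapture(0)

# Request MJPEG and 30 FPS from the driver; cameras are free to ignore this
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*'MJPG'))
cap.set(cv2.CAP_PROP_FPS, 30)

# Check what the driver actually reports back
print(cap.get(cv2.CAP_PROP_FPS))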
I was facing a similar problem, and this is the solution I came up with. It sets a constant frame rate and works on both live video and recorded video.
import cv2
import time

cap = cv2.VideoCapture('your video location')

initial_time = time.time()
to_time = time.time()
set_fps = 25  # Set your desired frame rate

# Variables used to calculate the true FPS
prev_frame_time = 0
new_frame_time = 0

while True:
    while_running = time.time()  # Keep updating time with each frame
    new_time = while_running - initial_time  # If time taken is 1/fps, then read a frame

    if new_time >= 1 / set_fps:
        ret, frame = cap.read()
        if ret:
            # Calculating the true FPS
            new_frame_time = time.time()
            fps = 1 / (new_frame_time - prev_frame_time)
            prev_frame_time = new_frame_time
            fps = int(fps)
            fps = str(fps)
            print(fps)
            cv2.imshow('joined', frame)
            initial_time = while_running  # Update the initial time with the current time
        else:
            total_time_of_video = while_running - to_time  # Total running time of the video
            print(total_time_of_video)
            break

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
I am trying to write an OpenCV program where I break the video down into frames and compare two consecutive frames: if both are the same I reject the frame, otherwise I append the frame to an output file.
How can I achieve this?
OpenCV 2.4.13, Python 2.7
The following example captures frames from the first camera connected to your system, compares each frame to the previous frame, and when different, the frame is added to a file. If you sit still in front of the camera, you might see the diagnostic 'no change' message printed if you run the program from a console terminal window.
There are a number of ways to measure how different one frame is from another. For simplicity we have used the average difference, pixel by pixel, between the new frame and the previous frame, compared to a threshold.
Note that frames are returned as numpy arrays by the OpenCV read function.
import numpy as np
import cv2

interval = 100          # delay between frames, in ms
fps = 1000. / interval
camnum = 0
outfilename = 'temp.avi'
threshold = 100.

cap = cv2.VideoCapture(camnum)
ret, frame = cap.read()
height, width, nchannels = frame.shape

fourcc = cv2.VideoWriter_fourcc(*'MJPG')
out = cv2.VideoWriter(outfilename, fourcc, fps, (width, height))

while True:
    # previous frame
    frame0 = frame
    # new frame
    ret, frame = cap.read()
    if not ret:
        break

    # how different is it? (cast to int so the uint8 subtraction cannot wrap around)
    if np.sum(np.absolute(frame.astype(int) - frame0.astype(int))) / np.size(frame) > threshold:
        out.write(frame)
    else:
        print('no change')

    # show it
    cv2.imshow('Type "q" to close', frame)

    # check for keystroke
    key = cv2.waitKey(interval) & 0xFF
    # exit if so-commanded
    if key == ord('q'):
        print('received key q')
        break

# When everything is done, release the capture
cap.release()
out.release()
print('VideoDemo - exit')
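As noted above, the average absolute difference is only one possible metric. A slightly more compact variant uses cv2.absdiff, which handles the unsigned frame data for you; the helper name and the threshold value here are placeholders you would tune:

import cv2
import numpy as np

def frames_differ(frame_a, frame_b, threshold=10.0):
    """Return True if the mean per-pixel absolute difference exceeds the threshold."""
    diff = cv2.absdiff(frame_a, frame_b)   # element-wise |a - b| without uint8 wraparound
    return np.mean(diff) > threshold

In the loop above, out.write(frame) would then be guarded by frames_differ(frame, frame0) instead of the inline expression.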
I'm trying to use MOG background subtraction, but the "history" parameter doesn't seem to have any effect.
OpenCV 2.4.13
Python (2.7.6)
Observation: The program appears to use the very first frame it captures for all future background subtractions.
Expectation: The background should slowly evolve based on the "history" parameter, so that if the camera angle changes, or if a person/object leaves the field-of-view, the "background" image will change accordingly.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
mog = cv2.BackgroundSubtractorMOG(history=10, nmixtures=5, backgroundRatio=0.25)

while True:
    ret, img1 = cap.read()
    if ret is True:
        cv2.imshow("original", img1)
        img_mog = mog.apply(img1)
        cv2.imshow("mog", img_mog)
        if cv2.waitKey(10) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
Thank you in advance for your help!
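One thing worth experimenting with is the optional learning-rate argument of apply(), which controls how quickly new frames are absorbed into the background model. The value below is only an illustration, and whether the 2.4 bindings accept it as a keyword and whether it explains the behaviour you are seeing are assumptions on my part:

import cv2

cap = cv2.VideoCapture(0)
mog = cv2.BackgroundSubtractorMOG(history=10, nmixtures=5, backgroundRatio=0.25)

while True:
    ret, img1 = cap.read()
    if not ret:
        break
    # A higher learning rate makes the background model adapt faster;
    # 0 freezes it, 1 rebuilds it from the latest frame alone.
    fgmask = mog.apply(img1, learningRate=0.01)
    cv2.imshow("mog", fgmask)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()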