I'm using OpenCV in Python 3 on a Raspberry Pi Compute Module 4 with two cameras. I want to capture video from a camera at 90 fps. That works in principle, but for whatever reason OpenCV crops a large part of my image as soon as my desired fps exceeds 40, which is a problem for me.
A captured video from a camera with 40 fps (or lower)
The same camera with 41 fps (or higher)
Here is my code; right now it is just supposed to show a window with the camera's video:
import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_PROP_FPS, 41)  # anything above 40 triggers the cropping

width = 640
height = 480
cam.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

while True:
    check, frame = cam.read()
    cv2.imshow('video', frame)
    key = cv2.waitKey(1)
    if key == 27:  # Esc quits
        break

cam.release()
cv2.destroyAllWindows()
I didn't find any useful options in the OpenCV documentation, so I hope you can help me.
I tried lowering the resolution, but that didn't change anything.
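To rule out a silent failure, I also read the settings back after applying them, since as far as I know cap.set() can fail quietly or push the driver into a different (possibly cropped) sensor mode. A minimal check, assuming camera index 1 as in my code above:

import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_PROP_FPS, 90)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
# Read back what the driver actually negotiated
print("fps:   ", cam.get(cv2.CAP_PROP_FPS))
print("width: ", cam.get(cv2.CAP_PROP_FRAME_WIDTH))
print("height:", cam.get(cv2.CAP_PROP_FRAME_HEIGHT))
cam.release()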
I could record the screen, but whenever I play the video it is very fast. How can I solve this issue?
import time

import cv2
import numpy as np
import pyautogui

resolution = (1920, 1080)
codec = cv2.VideoWriter_fourcc(*"XVID")
filename = "Recording.avi"
fps = 60.0

out = cv2.VideoWriter(filename, codec, fps, resolution)

cv2.namedWindow("Live", cv2.WINDOW_NORMAL)
cv2.resizeWindow("Live", 480, 270)

while True:
    img = pyautogui.screenshot()  # PIL image in RGB order
    frame = np.array(img)
    frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)  # OpenCV expects BGR
    out.write(frame)
    cv2.imshow('Live', frame)
    if cv2.waitKey(1) == ord('q'):
        break
    time.sleep(1 / 30)

out.release()
cv2.destroyAllWindows()
There are a few things you can try to make the recorded video play at normal speed. The underlying problem is a mismatch: you tell the VideoWriter the footage was captured at 60 fps, but a loop around pyautogui.screenshot() (plus the sleep) will usually produce far fewer frames per second, so on playback the player shows your frames faster than they were captured. Try setting fps to 25 or 30. You can also increase the time passed to sleep(), which makes the loop pause longer between frames, but whatever rate the loop actually achieves should match the fps you give the writer.
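If you want to go the matching route, here is a rough sketch of measuring the rate the capture loop actually achieves and opening the writer with that value (the 30-frame warm-up count is arbitrary):

import time

import cv2
import pyautogui

# Measure the real capture rate over a short warm-up run
n_warmup = 30
start = time.time()
for _ in range(n_warmup):
    pyautogui.screenshot()
measured_fps = n_warmup / (time.time() - start)
print("measured fps:", measured_fps)

# Open the writer with the measured rate so playback speed matches capture speed
out = cv2.VideoWriter("Recording.avi",
                      cv2.VideoWriter_fourcc(*"XVID"),
                      measured_fps, (1920, 1080))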
I recently bought a stereo camera through Amazon and I want to use it for depth mapping. The problem is that the camera's output is a single video containing the view of both cameras side by side.
What I want is two separate outputs from the single USB port, if that is possible. I could use cropping, but I don't want to, because I am trying to reduce processing time and I want the outputs separately.
The above image was generated from the following code:
import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_PROP_FPS, 120)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    s, original = cam.read()
    cv2.imshow('original', original)
    if cv2.waitKey(1) & 0xFF == ord('w'):
        break

cam.release()
cv2.destroyAllWindows()
I have also tried other techniques such as:
import cv2

left = cv2.VideoCapture(1)
right = cv2.VideoCapture(2)

left.set(cv2.CAP_PROP_FRAME_WIDTH, 720)
left.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
right.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
right.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)
left.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))
right.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"MJPG"))

while True:
    # Grab both frames first, then retrieve, to minimize latency between cameras
    left.grab()
    right.grab()
    _, leftFrame = left.retrieve()
    leftHeight, leftWidth = leftFrame.shape[:2]  # shape is (rows, cols)
    _, rightFrame = right.retrieve()
    rightHeight, rightWidth = rightFrame.shape[:2]

    # TODO: Calibrate the cameras and correct the images

    cv2.imshow('left', leftFrame)
    cv2.imshow('right', rightFrame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

left.release()
right.release()
cv2.destroyAllWindows()
but OpenCV does not recognise a third camera, so any help would be nice.
My OpenCV version is 3.4.
P.S. If anyone can present a solution in C++, that would also work for me.
OK, so after analysing the problem I figured that the best approach is to crop the combined image in half, since it saves processing time. If you have two different image sources, your pipeline time for grabbing the images doubles. After testing the stereo camera with and without cropping, I saw no noticeable change in FPS. Here is a simple snippet that crops the video and displays the halves in two different windows.
import cv2

cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_PROP_FPS, 120)
cam.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

# Read one frame up front to learn the combined frame's dimensions
s, original = cam.read()
height, width, channels = original.shape
print(width)
print(height)

while True:
    s, original = cam.read()
    left = original[0:height, 0:width // 2]       # left half of the combined frame
    right = original[0:height, width // 2:width]  # right half
    cv2.imshow('left', left)
    cv2.imshow('Right', right)
    if cv2.waitKey(1) & 0xFF == ord('w'):
        break

cam.release()
cv2.destroyAllWindows()
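A note on why the cropping is essentially free: NumPy slicing returns a view into the same frame buffer rather than a copy, so splitting the frame adds no per-frame memory traffic. A quick way to confirm this:

import numpy as np

frame = np.zeros((480, 1280, 3), dtype=np.uint8)
left = frame[:, :640]      # a view, not a copy
print(left.base is frame)  # True: both halves share the original buffer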
As a task for school, our group has to create an application that detects when a goal is scored, meaning a ball-shaped object passes a line.
First we are attempting to input a video, get OpenCV to track the ball, and then to output it as a video.
I have put together a bunch of code snippets that I found on Stack Overflow, but it doesn't work.
I am creating a new post because all the related threads are either in C++ or use colour detection instead of the shape detection that we use. I also can't find a clear answer on writing the video back out once it has been turned into a series of images.
Following is the code that I have so far:
import cv2
import numpy as np

cap = cv2.VideoCapture('bal.mp4')

fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output1.avi', fourcc, 20.0, (640, 480))

while True:
    # Take each frame
    ret, frame = cap.read()
    if not ret:
        break

    frame = cv2.medianBlur(frame, 5)
    cimg = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # The minRadius value was cut off in the original paste; 0 (the default) is assumed here
    circles = cv2.HoughCircles(cimg, cv2.HOUGH_GRADIENT, 1, 20,
                               param1=50, param2=30, minRadius=0)
    if circles is None:
        print("NoneType")
        break

    circles = np.uint16(np.around(circles))
    for i in circles[0, :]:
        # draw the outer circle
        cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
        # draw the center of the circle
        cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)

    cv2.imwrite('test.jpg', cimg)
    out.write(cimg)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
out.release()
cv2.destroyAllWindows()
We get working images, but the video is unplayable with VLC or any other media player.
This is an image from the program:
The issue now is turning it into a playable video.
Thanks in advance.
Not sure if you got this working in the end, but changing the output to mp4 worked for me on my Mac:
frame_width = int(cap.get(3))   # 3 = CAP_PROP_FRAME_WIDTH
frame_height = int(cap.get(4))  # 4 = CAP_PROP_FRAME_HEIGHT

# 0x7634706d is the fourcc 'mp4v' as an integer
out = cv2.VideoWriter('static/video/outpy.mp4', 0x7634706d, 20, (frame_width, frame_height))
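For what it's worth, another common cause of unplayable output (and possibly what happened here, though that's an assumption about your setup): cv2.VideoWriter silently produces a broken file when the frames you write don't match the (width, height) passed to the constructor, or are single-channel while the writer expects BGR. In the question's loop cimg is grayscale, so a small helper like this before out.write might help (the helper name is mine; the (640, 480) matches the writer in the question):

import cv2

def make_writable(frame, size=(640, 480)):
    # Coerce a frame into what the VideoWriter was opened with:
    # 3-channel BGR at exactly the writer's (width, height).
    if frame.ndim == 2:  # grayscale -> BGR
        frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
    if (frame.shape[1], frame.shape[0]) != size:
        frame = cv2.resize(frame, size)
    return frame

Then write with out.write(make_writable(cimg)) instead of out.write(cimg).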
I am working on a project which requires face detection on a Raspberry Pi, using a USB camera. The frame rate was very slow, so I scaled down the capture resolution to 320 x 214 using VideoCapture.set(). This increased the capture frame rate considerably, but the feed is now displayed in a 320 x 214 window. I want to keep the same capture resolution but display it in a larger window. I am just a beginner with Python and OpenCV, so please help me do it. Below is the code I wrote for a simple camera feed.
import cv2
import time

cap = cv2.VideoCapture(-1)
cap.set(3, 320)  # 3 = CAP_PROP_FRAME_WIDTH
cap.set(4, 216)  # 4 = CAP_PROP_FRAME_HEIGHT
cap.set(5, 15)   # 5 = CAP_PROP_FPS
time.sleep(2)

while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow("captured video", frame)
    if cv2.waitKey(33) == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
If I understand you correctly, you want the displayed image to be a scaled-up version of the original. If so, you just need cv2.resize:
display_scale = 4
height, width = frame.shape[0:2]
display_height, display_width = display_scale * height, display_scale * width

# you can choose different interpolation methods
frame_display = cv2.resize(frame, (display_width, display_height),
                           interpolation=cv2.INTER_CUBIC)
cv2.imshow("captured video", frame_display)
I am working in Python/OpenCV, acquiring frames from a USB webcam (Logitech C615 Camera, supposedly HD 1080p). 1080p has a 16:9 aspect ratio and thus I should be able to acquire images at all of these resolutions:
1920 x 1080
1600 x 900
1366 x 768
1280 x 720
1024 x 576
I didn't write the camera driver however, so how do I know if I am really getting these pixels off of the camera? For example, I can specify 3840 x 2160 and I get a video frame of that size!
Is there a systematic way I can evaluate/determine the real resolution or effective resolution of the camera given these different resolution settings? Below is some Python/OpenCV code to demonstrate.
import cv2
import time

cap = cv2.VideoCapture(0)  # note you may need to pass 1 instead of 0 to get your camera
cap.set(3, 3840)  # 3 = CAP_PROP_FRAME_WIDTH, horizontal pixels
cap.set(4, 2160)  # 4 = CAP_PROP_FRAME_HEIGHT, vertical pixels
cap.set(5, 15)    # 5 = CAP_PROP_FPS, frame rate
time.sleep(2)     # trying to solve a delay issue ... never mind this

# acquire the video from the camera
while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow("captured video", frame)
    if cv2.waitKey(33) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
import cv2

cam = cv2.VideoCapture(0)

# Ask the capture device what resolution it is actually delivering
w = cam.get(cv2.CAP_PROP_FRAME_WIDTH)
h = cam.get(cv2.CAP_PROP_FRAME_HEIGHT)
print(w, h)

while cam.isOpened():
    err, img = cam.read()
    cv2.imshow("lalala", img)
    k = cv2.waitKey(10) & 0xff
    if k == 27:  # Esc quits
        break

cam.release()
cv2.destroyAllWindows()
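One caveat with this check: get() reports what the capture pipeline negotiated, not the true optical resolution, so if the driver upscales internally you will still see the requested numbers. A sketch that at least compares the requested mode, the reported mode, and the actual delivered frame shape across several resolutions (the resolution list is the one from the question, plus the suspicious 3840 x 2160):

import cv2

cam = cv2.VideoCapture(0)
for (w, h) in [(1920, 1080), (1600, 900), (1366, 768),
               (1280, 720), (1024, 576), (3840, 2160)]:
    cam.set(cv2.CAP_PROP_FRAME_WIDTH, w)
    cam.set(cv2.CAP_PROP_FRAME_HEIGHT, h)
    ok, frame = cam.read()
    reported = (cam.get(cv2.CAP_PROP_FRAME_WIDTH),
                cam.get(cv2.CAP_PROP_FRAME_HEIGHT))
    actual = frame.shape[1::-1] if ok else None  # (width, height) of the frame
    print("requested", (w, h), "reported", reported, "delivered", actual)
cam.release()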