I'm trying to capture photos and videos using cv2.VideoCapture and cameras with a 16:9 aspect ratio. Every image returned by OpenCV has black sidebars and the frame is cropped. In my example, instead of returning a 1280 x 720 image, it returns a 960 x 720 image. The same thing happens with a C920 webcam (1920 x 1080).
What am I doing wrong?
import cv2

video = cv2.VideoCapture(0)
video.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
video.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while True:
    connected, frame = video.read()
    cv2.imshow("Video", frame)
    if cv2.waitKey(1) == ord('s'):
        video.release()
        break

cv2.destroyAllWindows()
Using OpenCV: (screenshot of the cropped frame with black sidebars)
Using Windows Camera: (screenshot of the full-width frame)
I had this exact issue with a Logitech wide-angle camera on Windows and suspected a driver problem.
I solved it by using the DirectShow backend instead of the native driver:
cv2.VideoCapture(cv2.CAP_DSHOW)
If you have more than one camera, add the camera index to that value, like this:
cv2.VideoCapture(cv2.CAP_DSHOW + camera_index)
It will then accept the desired resolution with the correct aspect ratio, without the sidebars.
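Putting that together with the resolution calls from the question, a minimal sketch might look like this (the camera index and the 1280 x 720 size are example values; reading the properties back shows what the driver actually negotiated):

import cv2

camera_index = 0
cap = cv2.VideoCapture(cv2.CAP_DSHOW + camera_index)  # DirectShow backend on Windows
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

# Read the properties back: the driver may silently fall back to another size.
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

ret, frame = cap.read()
if ret:
    print(frame.shape)  # expected (720, 1280, 3) if the request was honored
cap.release()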
The answer of #luismesas is completely right and worked for me.
But for people as unskilled as I am: you need to keep the capture object returned by cv2.VideoCapture. CAP_DSHOW is not a parameter you set afterwards on that object; it is passed to the constructor, ideally as the separate API-preference argument:
import cv2

camera_index = 0
cap = cv2.VideoCapture(camera_index, cv2.CAP_DSHOW)  # camera index first, backend second
ret, frame = cap.read()
Confirmed on Webcam device HD PRO WEBCAM C920.
I had the same issue too, but only on Windows 10 with OpenCV 3.4 and Python 3.7.
I get the full resolution, without the black sidebars, on macOS.
On Windows I used PyGame to capture the webcam input and got the full 1920x1080 resolution.
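For reference, grabbing a frame with pygame.camera looks roughly like this (a sketch; the device index and resolution are example values, and backend support varies by platform and pygame version):

import pygame.camera

pygame.camera.init()
cams = pygame.camera.list_cameras()               # e.g. ['0'] or ['/dev/video0']
cam = pygame.camera.Camera(cams[0], (1920, 1080))
cam.start()

image = cam.get_image()                           # a pygame.Surface
print(image.get_size())                           # expected (1920, 1080)

cam.stop()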
Just resize the received frame:
cv::Mat dst;
cv::resize(frame, dst, cv::Size(1280, 720));
cv::imshow("Video", dst);
Check it!
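Since the question uses the Python bindings, the equivalent of that answer would be roughly:

import cv2

video = cv2.VideoCapture(0)
connected, frame = video.read()
if connected:
    # Stretch whatever the camera delivered to 1280x720 before displaying it.
    dst = cv2.resize(frame, (1280, 720))
    cv2.imshow("Video", dst)
    cv2.waitKey(0)
video.release()
cv2.destroyAllWindows()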
I have a USB camera that supports a "live mode" (video) as well as a "snapshot mode".
The maximum resolution in snapshot mode is 4128 x 3096.
import cv2
from cv2 import CAP_DSHOW

cap = cv2.VideoCapture(0, CAP_DSHOW)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 4128)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 3096)
ret, frame = cap.read()

while True:
    cv2.imshow('img1', frame)
    if cv2.waitKey(1) & 0xFF == ord('y'):
        cv2.imwrite("c10.png", frame)
        cv2.destroyAllWindows()
        break

cap.release()
Doc
I tried every resolution higher than Full HD, but I cannot get an image with a resolution higher than 1920x1080 px.
Thanks for the help.
EDIT: Does anyone know a simple way to grab a frame from a USB camera at a desired resolution in Python? I tried a pygame example but only got a black image, even though my camera is detected at [0].
We are in the middle of a project: we use a Dino-Lite USB microscope to take some pictures, but we need to set it to specific settings. The only thing I cannot figure out is how to turn off the LED lights (I mean the flash-like illumination). Right now I am here:
import cv2
cap = cv2.VideoCapture(1)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 2592)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1944)
cap.set(cv2.CAP_PROP_EXPOSURE, -8)
cap.set(cv2.CAP_PROP_BACKLIGHT, 0) # does not work!
result, image = cap.read()
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
I guess that
cap.set(cv2.CAP_PROP_BACKLIGHT, 0)
works with some LED, but not with the 'front' one that I need to turn off... if you know what I mean.
OS: Windows 10/11
Try using guvcview to turn off the LED light. It should stay off when you then open the camera in OpenCV.
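As a side note (not part of the answer above, and not a guaranteed fix): you can at least check whether the backend accepts a property at all by looking at the return values of cap.set() and cap.get(). A minimal sketch:

import cv2

cap = cv2.VideoCapture(1)

# cap.set() returns False when the backend rejects the property outright;
# cap.get() returns what the driver actually reports (often 0 or -1 for
# unsupported properties), so a silently ignored request shows up here.
accepted = cap.set(cv2.CAP_PROP_BACKLIGHT, 0)
print("set accepted:", accepted, "reported value:", cap.get(cv2.CAP_PROP_BACKLIGHT))

cap.release()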
My problem is that when I set the resolution higher than 640x480, only the bottom part of the output has correct colors. The rest of the output has a bluish tint.
I have a Raspberry Pi 4 with 4 GB RAM and a PiCamera V2. The CPU usage is not more than ~65% at the highest resolution.
The same error also appears on another Raspberry Pi and its PiCamera (V2 NoIR).
Here are the images (ignore the white bars in the corners: they come from bad screenshotting):
640x480 - normal
1920x1080 - with error
3280x2464 - with error
Here is my Python script:
import cv2

cap = cv2.VideoCapture(0)

width = 640; height = 480
# width = 1920; height = 1080
# width = 3280; height = 2464

cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
cv2.waitKey()
print(cap.get(cv2.CAP_PROP_FRAME_WIDTH), cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

while cap.isOpened():
    ret, frame = cap.read()
    cv2.imshow('Resolution: ' + str(width) + 'x' + str(height), frame)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
I know that with a higher resolution I will get lower framerates.
Does anyone have an idea what the source of the error could be and/or how I can resolve it?
Regards
I will answer the question myself:
The main problem is the PiCamera hardware and how the Raspberry Pi reads it through the GPU.
The quick solution was to change the resolution to multiples of 32. For the Full HD case it needs to be 1920x1088 instead of 1920x1080. Then the colors are normal again.
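A small helper for this (my own sketch, assuming the multiples-of-32 rule above) could round the requested size up before handing it to OpenCV:

import cv2

def round_up(value, multiple):
    # Round value up to the nearest multiple, e.g. round_up(1080, 32) == 1088.
    return ((value + multiple - 1) // multiple) * multiple

width, height = 1920, 1080
width, height = round_up(width, 32), round_up(height, 32)   # -> 1920 x 1088

cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)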
I also found out the highest resolutions before the fps drop:
horizontal: 1280x704
vertical: 640x672
Every higher resolution drops the fps from 30+ to ~6-8.
Which part of the camera sensor is used also depends on the resolution.
For more detail, read the documentation carefully ;-)
Picamera official Documentation
I grab frames from a video stream using a monochromatic camera, the oCam-1MGN-U. When I send a frame to the output I get 3 pictures instead of 1. I know that this camera uses 1 channel. How can I resolve this problem?
import cv2

if __name__ == '__main__':
    cap = cv2.VideoCapture(1)  # streaming from the monochromatic camera
    while cap.isOpened():
        success, frame = cap.read()
        if success:
            cv2.imshow('Original', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
    print('End')
I'm working on Linux and I got three pictures very similar to this:
https://github.com/TheImagingSource/tiscamera/issues/20
The frame I get has shape (480, 640, 3).
What I tried:
I tested on OpenCV 3.2 and 3.4.
I split the frame with b, g, r = cv2.split(frame) and sent only one channel to the output, but I still get 3 pictures.
I changed the resolution of the video stream.
It sounds like this is an issue known since 2014:
https://github.com/TheImagingSource/tiscamera/issues/20
The OpenCV capture class is in a very sad state (not only concerning v4l2). The reason your image looks that way is that it interprets the incoming Y800 data as RGB while trying to maintain the correct resolution.
This can only be fixed by either patching OpenCV or by using other means to grab images.
A suggestion for monochromatic cameras is to use:
cv2.imdecode(frame, CV_LOAD_IMAGE_GRAYSCALE)
Could you try it and let us know if it works?
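A different workaround that may be worth trying (my own suggestion, not from the linked issue, and not verified with this particular camera) is to ask OpenCV not to convert the incoming data to RGB at all and then inspect what the backend hands you:

import cv2

cap = cv2.VideoCapture(1)
# Ask the backend for the raw (Y800) buffer instead of a 3-channel image.
# Whether this actually works depends on the backend and driver.
cap.set(cv2.CAP_PROP_CONVERT_RGB, 0)

ret, frame = cap.read()
if ret:
    print(frame.shape, frame.dtype)  # inspect what the backend actually returns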
We're doing a project in school where we need to do basic image processing. Our goal is to use every video frame from the Raspberry Pi camera and do real-time image processing.
We've tried to include raspistill in our Python program, but so far nothing has worked. The goal of our project is to design an RC car which follows a blue/red/whatever coloured line with the help of image processing.
We thought it would be a good idea to write a Python program which does all the necessary image processing, but we currently struggle with getting the recorded images into the Python program. Is there a way to do this with picamera, or should we try a different approach?
For anyone curious, this is how our program currently looks:
import cv2

while True:
    #camera = picamera.PiCamera()
    #camera.capture('image1.jpg')
    img = cv2.imread('image1.jpg')
    width = img.shape[1]
    height = img.shape[0]
    height = height - 1
    for x in range(0, width):
        if x >= 0 and x < (width // 2):
            blue = img.item(height, x, 0)
            green = img.item(height, x, 1)
            red = img.item(height, x, 2)
            if red > green and red > blue:
                pass  # (the line-following logic continues here in the original program)
OpenCV already contains functions to process live camera data.
This OpenCV documentation provides a simple example:
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
Of course, you do not want to show the image, but all your processing can be done there.
Remember to sleep a few hundred milliseconds so the Pi does not overheat as much.
Edit:
"how exactly would I go about it though. I used "img = cv2.imread('image1.jpg')" all the time. What do I need to use instead to get the "img" variable right here? What do I use? And what is ret, for? :)"
ret indicates whether the read was successful. Exit the program if it is not.
The frame you read is nothing other than your img = cv2.imread('image1.jpg'), so your detection code should work exactly the same.
The only difference is that your image does not need to be saved and reopened. Also, for debugging purposes, you can save the recorded image like this:
import cv2, time

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    # add a file extension so imwrite knows which format to write
    cv2.imwrite(time.strftime("%Y%m%d-%H%M%S") + ".jpg", frame)
cap.release()
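To connect this with the program from the question: the frame returned by cap.read() takes the place of img = cv2.imread('image1.jpg'), so the existing line-detection loop can run directly on it. A rough sketch (the 0.2 s pause is only an example value):

import time
import cv2

cap = cv2.VideoCapture(0)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break

    # 'frame' plays the role of img = cv2.imread('image1.jpg'); the per-pixel
    # colour checks from the question go here. As an illustration, test which
    # pixels of the bottom row are predominantly red (BGR channel order).
    bottom_row = frame[frame.shape[0] - 1]
    red_dominant = (bottom_row[:, 2] > bottom_row[:, 1]) & (bottom_row[:, 2] > bottom_row[:, 0])

    time.sleep(0.2)  # a few hundred milliseconds between frames

cap.release()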
You can use picamera to acquire images.
To make it "real time", you can acquire data every X milliseconds. You need to set X depending on the power of your hardware (and the complexity of the OpenCV algorithm).
Here's an example (from http://picamera.readthedocs.io/en/release-1.10/api_camera.html#picamera.camera.PiCamera.capture_continuous) of how to capture 60 images at one-second intervals using picamera:
import time
import picamera

with picamera.PiCamera() as camera:
    camera.start_preview()
    try:
        for i, filename in enumerate(camera.capture_continuous('image{counter:02d}.jpg')):
            print(filename)
            time.sleep(1)
            if i == 59:
                break
    finally:
        camera.stop_preview()
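Since the goal is real-time processing, it may also help to know that picamera can capture straight into a numpy array instead of JPEG files on disk. A rough sketch using picamera.array.PiRGBArray (the resolution, framerate, and 60-frame limit are example values):

import cv2
import picamera
import picamera.array

with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)
    camera.framerate = 30
    with picamera.array.PiRGBArray(camera, size=(640, 480)) as raw:
        # use_video_port=True trades a little image quality for speed,
        # which suits real-time processing.
        for i, _ in enumerate(camera.capture_continuous(raw, format='bgr',
                                                        use_video_port=True)):
            frame = raw.array                # a numpy BGR image, ready for OpenCV
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            raw.truncate(0)                  # reset the buffer for the next frame
            if i == 59:
                break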