I am using OpenCV (2.4) and Python (2.7.3) with a USB camera from Thorlabs (DC1545M).
I am doing some image analysis on a video stream and I would like to be able to change some of the camera parameters from my video stream. The confusing thing is that I am able to change some of the camera properties but not all of them, and I am unsure of what I am doing wrong.
Here is the code, using the cv2 bindings in Python, and I can confirm that it runs:
import cv2
#capture from camera at location 0
cap = cv2.VideoCapture(0)
#set the width and height, and UNSUCCESSFULLY set the exposure time
cap.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 1024)
cap.set(cv2.cv.CV_CAP_PROP_EXPOSURE, 0.1)
while True:
    ret, img = cap.read()
    cv2.imshow("input", img)
    #cv2.imshow("thresholded", imgray*thresh2)
    key = cv2.waitKey(10)
    if key == 27:
        break
cap.release()
cv2.destroyAllWindows()
For reference, the first argument in the cap.set() command refers to the enumeration of the camera properties, listed below:
0. CV_CAP_PROP_POS_MSEC Current position of the video file in milliseconds.
1. CV_CAP_PROP_POS_FRAMES 0-based index of the frame to be decoded/captured next.
2. CV_CAP_PROP_POS_AVI_RATIO Relative position of the video file
3. CV_CAP_PROP_FRAME_WIDTH Width of the frames in the video stream.
4. CV_CAP_PROP_FRAME_HEIGHT Height of the frames in the video stream.
5. CV_CAP_PROP_FPS Frame rate.
6. CV_CAP_PROP_FOURCC 4-character code of codec.
7. CV_CAP_PROP_FRAME_COUNT Number of frames in the video file.
8. CV_CAP_PROP_FORMAT Format of the Mat objects returned by retrieve() .
9. CV_CAP_PROP_MODE Backend-specific value indicating the current capture mode.
10. CV_CAP_PROP_BRIGHTNESS Brightness of the image (only for cameras).
11. CV_CAP_PROP_CONTRAST Contrast of the image (only for cameras).
12. CV_CAP_PROP_SATURATION Saturation of the image (only for cameras).
13. CV_CAP_PROP_HUE Hue of the image (only for cameras).
14. CV_CAP_PROP_GAIN Gain of the image (only for cameras).
15. CV_CAP_PROP_EXPOSURE Exposure (only for cameras).
16. CV_CAP_PROP_CONVERT_RGB Boolean flags indicating whether images should be converted to RGB.
17. CV_CAP_PROP_WHITE_BALANCE Currently unsupported
18. CV_CAP_PROP_RECTIFICATION Rectification flag for stereo cameras (note: only supported by DC1394 v 2.x backend currently)
(Please note, as commenter Markus Weber pointed out below, in OpenCV 3 and later you have to remove the "CV" prefix from the property name, e.g.
cv2.cv.CV_CAP_PROP_FRAME_HEIGHT -> cv2.CAP_PROP_FRAME_HEIGHT)
My questions are:
Is it possible to set camera exposure time (or the other camera parameters) through python/opencv?
If not, how would I go about setting these parameters?
Note: There is C++ code provided by the camera manufacturer showing how to do this, but I'm not an expert (by a long shot) in C++ and would appreciate any python-based solution.
Not all parameters are supported by all cameras - camera parameters are actually one of the most troublesome parts of the OpenCV library. Each camera type - from Android cameras to USB webcams to professional ones - offers a different interface for modifying its parameters. There are many branches in the OpenCV code to support as many of them as possible, but of course not all possibilities are covered.
What you can do is investigate your camera driver, write a patch for OpenCV, and send it to code.opencv.org. That way others will benefit from your work, the same way you benefit from theirs.
There is also the possibility that your camera simply does not support your request - most USB cams are cheap and simple, and that parameter may just not be available for modification.
If you are sure the camera supports a given parameter (you say the camera manufacturer provides some code) and you do not want to mess with OpenCV internals, you can wrap that sample C++ code with boost::python to make it available from Python. Then, enjoy using it.
I had the same problem with OpenCV on a Raspberry Pi... I don't know if this will solve your problem, but what worked for me was inserting a delay between setting the resolution and setting the exposure:
import time
import cv2
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1024)
time.sleep(2)
cap.set(cv2.CAP_PROP_EXPOSURE, -8.0)
The sleep time you need may be different on your setup.
To avoid using integer values to identify the VideoCapture properties, one can use, e.g., cv2.cv.CV_CAP_PROP_FPS in OpenCV 2.4 and cv2.CAP_PROP_FPS in OpenCV 3.0. (See also Stefan's comment below.)
Here is a utility function that works for both OpenCV 2.4 and 3.0:
# returns OpenCV VideoCapture property id given, e.g., "FPS"
def capPropId(prop):
    return getattr(cv2 if OPCV3 else cv2.cv,
                   ("" if OPCV3 else "CV_") + "CAP_PROP_" + prop)
OPCV3 is set earlier in my utilities code like this:
from pkg_resources import parse_version
OPCV3 = parse_version(cv2.__version__) >= parse_version('3')
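The name construction inside capPropId can be sanity-checked without cv2 or a camera. Here is a hypothetical stand-alone mirror of that logic (using a simplified major-version check instead of parse_version, purely for illustration):

```python
def prop_name(prop, version):
    """Build the property name capPropId() would look up, given an
    OpenCV version string such as cv2.__version__ (illustration only)."""
    opcv3 = int(version.split('.')[0]) >= 3
    return ("" if opcv3 else "CV_") + "CAP_PROP_" + prop

# OpenCV 3+ dropped the "CV_" prefix:
print(prop_name("FPS", "3.4.2"))   # CAP_PROP_FPS
print(prop_name("FPS", "2.4.13"))  # CV_CAP_PROP_FPS
```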
I wasn't able to fix the problem in OpenCV either, but a video4linux (V4L2) workaround does work with OpenCV on Linux. At least, it does on my Raspberry Pi with Raspbian and my cheap webcam. This is not as solid, light, and portable as you'd like it to be, but for some situations it can be very useful nevertheless.
Make sure you have the v4l2-ctl application installed, e.g. from the Debian v4l-utils package. Then run (before starting the Python application, or from within it) the command:
v4l2-ctl -d /dev/video1 -c exposure_auto=1 -c exposure_auto_priority=0 -c exposure_absolute=10
This switches the camera to manual exposure and sets the exposure time with the last parameter - here 10 (for UVC cameras the unit is 100 µs, so 10 means 1 ms). The lower this value, the darker the image.
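The same v4l2-ctl call can be issued from Python before opening the capture. A small sketch - build_exposure_cmd is a hypothetical helper, and actually running the command requires Linux with v4l2-ctl installed:

```python
import subprocess

def build_exposure_cmd(device, exposure):
    """Build the v4l2-ctl argument list shown above (hypothetical helper)."""
    return ["v4l2-ctl", "-d", device,
            "-c", "exposure_auto=1",             # 1 = manual exposure mode
            "-c", "exposure_auto_priority=0",
            "-c", "exposure_absolute=%d" % exposure]

cmd = build_exposure_cmd("/dev/video1", 10)
# On a Linux box with v4l2-ctl installed, run it before opening the capture:
# subprocess.call(cmd)
```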
If anyone is still wondering what the value in CV_CAP_PROP_EXPOSURE might be:
Depends. For my cheap webcam I have to enter the desired value directly, e.g. 0.1 for 1/10s. For my expensive industrial camera I have to enter -5 to get an exposure time of 2^-5s = 1/32s.
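The two conventions described above can be written out as a tiny sketch (the helper names are made up for illustration; which convention applies depends entirely on your camera's driver):

```python
def exposure_seconds_log2(value):
    """Industrial-camera convention seen above: the API value is the
    base-2 log of the exposure time, so -5 means 2**-5 s = 1/32 s."""
    return 2.0 ** value

def exposure_seconds_direct(value):
    """Cheap-webcam convention: the API value is the time itself, in seconds."""
    return value

print(exposure_seconds_log2(-5))     # 0.03125  (1/32 s)
print(exposure_seconds_direct(0.1))  # 0.1      (1/10 s)
```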
I am trying to change the resolution of PS5 camera in OpenCV, Python.
The problem is that PS5 Camera officially isn't supported on PC, and I have to use custom camera drivers from GitHub: https://github.com/Hackinside/PS5_camera_files
Default image resolution by this code is 640x376
self.capture = cv2.VideoCapture(name)
I found out that the supported resolutions of this camera are 640x376 and 5148×1088, so I tried the following:
res = self.capture.set(cv2.CAP_PROP_FRAME_WIDTH, 5148)
res = self.capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 1088)
But in both cases res is False, and the resolution doesn't change; I can only receive small-resolution frames.
The camera definitely can work at 5148×1088, because if I launch the Windows Camera application it shows me high-quality images.
Okay, the problem was that I had a piece of code where I read frames from the capture in a loop:
while True:
    self.capture.read()
It was running in a parallel thread, so the resolution change happened at the same time as the reads; that is why changing the resolution always failed.
So the code provided in the question works if you call set() before you start reading frames.
I am trying to write a code for detecting the color green from a live video. I want to make a detector so that whenever the color green pops up in the screen, a counter starts counting how many times the color appears.
So for the video source I am using the OBS Virtual Camera, but I have no idea how to select it as the source. I have seen code that uses a webcam as the source, as shown below:
import numpy as np
import cv2
# Capturing video through webcam
webcam = cv2.VideoCapture(0)
Anyone have any idea how I can input the OBS virtual cam? Or does anyone know any alternative like switching to another language to do said task?
Windows treats the OBS Virtual Camera as a regular camera. The argument to cv2.VideoCapture is the camera index, so increase that number by 1 until the program picks up the OBS Virtual Camera. And there you go.
Keep in mind that there is a currently reported bug where OpenCV does not parse the stream from the OBS virtual cam and just shows a black background:
https://github.com/obsproject/obs-studio/issues/3635
I'm interested in using a stereo camera for calculating depth in video/images. The camera is a USB 3.0 Stereoscopic camera from Leopard Imaging https://www.leopardimaging.com/LI-USB30-V024STEREO.html. I'm using MAC OS X btw.
I was told by their customer support that it's a "UVC" camera. When connected to an apple computer it gives a greenish image.
My end goal is to use OpenCV to grab the left and right frames from both lenses so that I can calculate depth. I'm familiar with OpenCV, but not familiar with working with stereo cameras. Any help would be much appreciated. So far I have been doing this in Python 3:
import numpy as np
import cv2
import sys
from matplotlib import pyplot as plt
import pdb; pdb.set_trace()
print("Camera 1 capture", file=sys.stderr)
cap = cv2.VideoCapture(1)
print("Entering while", file=sys.stderr)
while(True):
    _ = cap.grab()
    retVal, frame = cap.retrieve()
    #gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
This works, but it only gives me a green picture/image with no depth. Any advice on how to get both left and right frames from the camera?
The Leopard Imaging people sent me a clue, but I'm not able to make progress with it myself. However, I thought it might help somebody, so I'm posting it as an answer.
The message was sent by one of the Leopard Imaging people I contacted through mail. It goes like this:
We have a customer who successfully separated the two videos on Linux OS a year ago. I tried to contact him to see if he could share the source code with us. Unfortunately, he has already left his former company, but he still found some notes (below). Hope it helps.
The camera combines the image from the two sensors into one 16-bit pixel data (The high 8 bits from one camera, and the low 8 bit from the other camera).
To fix this problem in Linux OpenCV, you should skip the color transform done by OpenCV, in modules/videoio/src/cap_v4l.cpp, static IplImage* icvRetrieveFrameCAM_V4L( CvCaptureCAM_V4L* capture, int):

case V4L2_PIX_FMT_YUYV:
#if 1
    /* skip the color conversion; just copy the image buffer to frame.imageData */
    memcpy(capture->frame.imageData,
           capture->buffers[capture->bufferIndex].start,
           capture->buffers[capture->bufferIndex].length);
#else
    yuyv_to_rgb24(capture->form.fmt.pix.width,
                  capture->form.fmt.pix.height,
                  (unsigned char*) capture->buffers[capture->bufferIndex].start,
                  (unsigned char*) capture->frame.imageData);
#endif
Hope this helps.
DISCLAIMER: I went forward with the C++ version (my project was in C++) given in libuvc and used the provided routines to obtain the left and right frames separately.
I am chasing the same issue, but with C/C++. I have contacted Leopard and am waiting for an answer. My understanding is that the two grayscale cameras are interlaced into a single image (and I think that OpenCV sees this as a color image, hence the strange colors and being out of focus.) You need to break apart the bytes into two separate frames. I am experimenting, trying to figure out the byte placement, but have not gotten too far. If you figure this out, please let me know!
They have a C# on Windows example here:
https://www.dropbox.com/sh/49cpwx0s70fuich/2e0_mFTJY_
Unfortunately it is using their libraries (which are not source) to do the heavy lifting, so I can't figure out what they are doing.
I met the same issue as you and finally arrived at a solution. But I don't know if OpenCV can handle it directly, especially in Python.
As jordanthompson said, the two images are interleaved into one. The image you receive is in YUYV format (Y is the light intensity, UV contains the color information). Each pixel is coded on 16 bits: 8 for Y, 8 for U or V depending on which pixel you are looking at.
Here, the Y bytes come from the left image and the UV bytes from the right image. But when OpenCV receives this image, it converts it to RGB, which definitively mixes the two images. I could not find a way in Python to tell OpenCV to get the image without converting it... Therefore we need to read the image before OpenCV does. I managed to do it with libuvc (https://github.com/ktossell/libuvc) after adding two small functions to perform the proper conversions. You can use this library if you can use OpenCV in C++ instead of Python. If you really have to stick with Python, then I don't have a complete solution, but at least now you know what to look for: try to read the image directly in YUYV and then separate the bytes into a left and a right image (you will get two grayscale images).
Good luck !
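The byte split described above (Y slots from one sensor, U/V slots from the other) can be sketched in plain Python. This is only an illustration on a toy buffer - the real raw frame would come from libuvc or a raw YUYV read, and split_stereo_yuyv is a made-up name:

```python
def split_stereo_yuyv(raw):
    """Split an interleaved 16-bit-per-pixel buffer into two 8-bit images:
    even bytes (the 'Y' slots) -> one sensor, odd bytes (the 'U/V' slots)
    -> the other. 'raw' is bytes of length 2 * width * height."""
    return raw[0::2], raw[1::2]

# Toy 4-pixel buffer: left-sensor bytes 0x10..0x13 interleaved with
# right-sensor bytes 0xA0..0xA3
raw = bytes([0x10, 0xA0, 0x11, 0xA1, 0x12, 0xA2, 0x13, 0xA3])
left, right = split_stereo_yuyv(raw)
print(left.hex())   # 10111213
print(right.hex())  # a0a1a2a3
```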
I have a computer connected to two external cameras and am using the Python interface to openCV to do real-time video analysis for instrument control. I love it.
But there is an odd quirk: the numbers assigned to the two cameras seem to be switching. That is, my program will be working with the correct camera. Then, if I run it again, there is a decent probability that it will grab the other camera. To temporarily correct this I can toggle between the cameras by changing the integer in the command to initialize the camera:
cap = cv2.VideoCapture(0)
vs.
cap = cv2.VideoCapture(1)
or back again. But, no matter which number I choose, there is still a chance that I'll get the wrong camera. Do any of you have any ideas for how to choose the correct camera every time?
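If this is on Linux (an assumption - the question does not say), one way around unstable indices is to open the camera by its persistent udev symlink under /dev/v4l/by-id/ instead of by number. A sketch, with list_v4l_by_id a made-up helper:

```python
import glob

def list_v4l_by_id():
    """List persistent V4L2 device symlinks (Linux/udev only); each entry
    encodes the camera model and USB port, so it is stable across reboots
    and re-enumeration. Returns an empty list on other systems."""
    return sorted(glob.glob("/dev/v4l/by-id/*"))

devices = list_v4l_by_id()
print(devices)
# With the V4L2 backend, cv2.VideoCapture also accepts a device path, e.g.:
# cap = cv2.VideoCapture(devices[0])
```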
I'm using OpenCV 2.4.5 with Python 2.7 to track people in video surveillance. In the beginning I used .avi and .mpeg videos to test my code; now I want to use an HCV-M100C camera. I am using a simple difference between frames (an initial frame compared with each frame) to identify objects in movement. It works very well with the .avi and .mpeg videos I have, but when I use the camera the results are bad, because a lot of noise and stains appear in the video. I thought the problem was my camera, but I made an .avi video with the same camera, tested that video with my code, and it works fine.
Now I'm using cv2.BackgroundSubtractorMOG, but the problem is still there.
So I think I need to do some pre-processing when I use the camera.
Just for completeness:
Solution concept:
Possibly you could stream the video camera with something like ffmpeg, which can transcode as well, and then use OpenCV to read the network stream. It might be easier to use VLC to stream instead.
Solution detail:
VLC Stream code (Shell):
vlc "http://192.168.180.60:82/videostream.cgi?user=admin&pwd=" --sout "#transcode{vcodec=mp2v,vb=800,scale=1,acodec=mpga,ab=128,channels=2,samplerate=44100}:duplicate{dst=rtp{sdp=rtsp://:8554/output.mpeg},dst=display}" --sout-keep
OpenCV Code (Python):
cap = cv2.VideoCapture("rtsp://:8554/output.mpeg")