Grabbing analog video into Python using OpenCV - python

Well, it seems my question has been asked many times before and, unfortunately, no one replied. I hope someone will help.
I have an Easycap device that converts the analog images from my analog camera to digital signals through a USB port.
The device shows up in Device Manager under the "Sound, video and game controllers" category as "SMI Grabber Device".
I use a simple Python script to display the video from this device. My laptop also has a built-in webcam.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Display the resulting frame
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
    if cv2.waitKey(1) & 0xFF == ord('s'):
        cv2.imwrite('screenshot.jpg', frame)

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
First, when I unplug the Easycap, cv2.VideoCapture(0) returns the embedded webcam video stream. However, when I plug in the Easycap, an error appears:
"Traceback (most recent call last):
File "C:\Users\DELL\Desktop\code\cam.py", line 10, in
cv2.imshow('frame',frame)
error: ......\src\opencv\modules\highgui\src\window.cpp:261: error: (-215) size.width>0 && size.height>0"
Notice that any nonzero index makes the program display the webcam image. So if I try cap = cv2.VideoCapture(1), it shows the webcam, and cap = cv2.VideoCapture(20) does the same.
I also tried passing "SMI Grabber Device" instead of 0 or 1 to the VideoCapture constructor, but it didn't make any difference.
I'm using Windows 8 and have installed the driver that ships with the Easycap. The software bundled with the driver (called ULead) works fine and displays the CCTV camera video. I tried displaying the images both with that program closed and with it running; the result is the same.
I previously used a C# program with the AForge library, which had a getCamList method (or something similar) that let me pick the specific device to display from a comboBox. I can't find a similar function in OpenCV (a probing workaround is sketched below).
I'm using OpenCV 2.4.6. I didn't try the code on earlier versions.
I really need to understand why this code doesn't work, bearing in mind that I'm a complete beginner in OpenCV and image processing.
I hope someone can help.
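As far as I know, OpenCV identifies capture devices only by index and has no equivalent of AForge's getCamList. A minimal sketch of the probing workaround, assuming that trying the first few indices is acceptable (the range of 5 is an arbitrary choice):

import cv2

# Probe the first few device indices and report which ones deliver frames.
# The number of indices to try (5) is an arbitrary assumption.
for index in range(5):
    cap = cv2.VideoCapture(index)
    ret, frame = cap.read()
    if ret:
        print("Device %d delivers frames of shape %s" % (index, frame.shape))
    else:
        print("Device %d did not return a frame" % index)
    cap.release()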

I am using an EasyCAP too.
You must check that ret is True.
I use the code below:
import cv2

vc = cv2.VideoCapture(0)   # device index 0 is an assumption; use whichever index the EasyCAP shows up as
WID = 'frame'              # window name

# Wait until the device actually delivers a frame
while True:
    ret, frame = vc.read()
    if ret:
        break
    cv2.waitKey(10)

h, w = frame.shape[:2]
print h, w

# Main display loop; only show the frame when the read succeeded
while True:
    ret, frame = vc.read()
    if ret:
        cv2.imshow(WID, frame)
    if cv2.waitKey(1) == 27:
        break

Let there be light!
On a serious note, I struggled with the same problem and I hope this helps!
the original thread + answer

Related

OpenCV's VideoCapture malfunctioning when fed from IP Camera

I'm simply trying to read an IP camera's live stream through OpenCV, i.e. as follows:
import numpy as np
import cv2

src = 'rtsp://id:pass#xx.xx.xx.xx'
cap = cv2.VideoCapture(src)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
The problem is that sometimes it works like a charm, showing the running live video, but at other times it creates a lot of blank windows that keep popping up until the job is killed.
Why does this happen, and how can we avoid it?
Maybe you should cover the case where the video capture fails to establish a healthy stream.
Note that it is possible not to receive a frame in some cases even though the video capture opens. This can happen for various reasons, such as congested network traffic, insufficient computational resources, or the power-saving mode of some IP cameras.
Therefore, I would suggest checking the frame size and making sure that your VideoCapture object is receiving frames of the right shape. (You can debug and inspect the size of a visible frame to learn the camera's expected resolution.)
A change like the following in your loop might help:
min_expected_frame_size = [some integer]

while(cap.isOpened()):
    ret, frame = cap.read()
    width = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
    height = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)

    if ret == True and ((width * height) >= min_expected_frame_size):
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

OpenCV: Issue with cv2.VideoCapture(0) and cv2.VideoCapture(-1)

After cap.release(), only the frame window closes; the webcam light is still on.
import cv2

cap = cv2.VideoCapture(0)
# cap = cv2.VideoCapture(-1)  # if I give '-1' instead of '0' the light turns off,
#                             # but the camera does not work because I don't have a second camera on my laptop.

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cap.isOpened()  # returns False
cv2.destroyAllWindows()
Pressing 'q' closes the frame window, but the webcam light stays on.
How do I turn the webcam off? (It only turns off after the Python shell is closed.)
If possible, tell me the path to the cv2.VideoCapture() class source code.
Set OPENCV_VIDEOIO_PRIORITY_MSMF=0 in your environment variables. It seems there is an instance leak in the OpenCV library. If you're on Windows, you can use setx in cmd to set the value: setx OPENCV_VIDEOIO_PRIORITY_MSMF 0.
Reference to the issue: here
It also looks like the issue has since been fixed, so try updating your OpenCV library or reinstalling it altogether.
That should solve your problem.
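If changing system-wide environment variables is inconvenient, setting the variable from the script itself before cv2 is imported should also work; a minimal sketch (whether it helps depends on your OpenCV build):

import os

# Must be set before cv2 is imported, otherwise the backend priority is already decided.
os.environ["OPENCV_VIDEOIO_PRIORITY_MSMF"] = "0"

import cv2

cap = cv2.VideoCapture(0)
# ... use the capture as usual ...
cap.release()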

OpenCV Python Assertion Failed

I am trying to run opencv-python==3.3.0.10 on macOS 10.12.6 to read from a file and show the video in a window. I copied the code exactly from http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html, section 'Playing Video from file'.
The code runs correctly and shows the video; however, after the video finishes it crashes the program with the following error:
Assertion failed: (ec == 0), function unlock, file /BuildRoot/Library/Caches/com.apple.xbs/Sources/libcxx/libcxx-307.5/src/mutex.cpp, line 48.
Does anyone have any idea of what might cause this?
Code snippet for your convenience (edited to include some suggestions from the comments):
import cv2

cap = cv2.VideoCapture('vtest.avi')

while(True):
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()
It's not clear from your question, but it looks like you're specifically running into a situation where the video completes playing without being interrupted. I think the issue is that the VideoCapture object is already closed by the time you get to cap.release(). I'd recommend putting the call to release inside the if statement, next to the break.
I've not had time to experiment, but I normally follow this pattern:
reader = cv2.VideoCapture(<stuff>)

while True:
    success, frame = reader.read()
    if not success:
        break
I've not had to call release explicitly in those contexts.
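For completeness, a minimal sketch of that pattern with the cleanup calls included ('vtest.avi' is just the file name from the tutorial):

import cv2

cap = cv2.VideoCapture('vtest.avi')
while True:
    success, frame = cap.read()
    if not success:
        # End of file (or a failed read): stop before touching an empty frame
        break
    cv2.imshow('frame', frame)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()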

OpenCV: Face detection taking advantage of a command line

I ran this (first) example, which launches my laptop's webcam so that I can see myself on the screen.
import numpy as np
import cv2

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
I installed OpenBR on Ubuntu 14.04 LTS and successfully ran this command on a picture of myself:
br - gui -algorithm ShowFaceDetection -enrollAll -enroll /home/nakkini/Desktop/myself.png
The above command, run in the terminal, displays my picture and draws a square around my face (face detection); it also highlights my eyes in green.
My Dream:
I wonder if there is a way to combine this command with the short program above so that, when the webcam is launched, I can see my face surrounded by the green rectangle.
Why do I need this?
I found similar programs in pure OpenCV/Python for this purpose. However, I will later need more than simple face detection, and I judge that OpenBR will save me a lot of headache. That is why I am looking for a way to run the command line somewhere inside the code above, as a first but big step.
Hints:
The frame in the code corresponds to myself.png in the command line. The solution should pass frame in place of myself.png to the command line from within the program itself.
Thank you very much in advance.
EDIT:
After correcting the typos in @Xavier's solution, I get no errors. However, the program does not run as I want:
First, the camera is launched and I see myself, but my face is not detected with a green rectangle. Secondly, I press a key to exit but the program does not exit: it shows me a picture of myself with my face detected. One last key press exits the program. My goal is to see my face detected while the camera is running.
You do not need OpenBR for this at all.
Just see OpenCV's Python face-detection tutorial.
Something like this should work:
import numpy as np
import cv2
import os

cap = cv2.VideoCapture(0)

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        cv2.imwrite("/home/nakkini/Desktop/myself.png", gray)
        os.system('br - gui -algorithm -ShowFaceDetection -enrollAll -enroll /home/nakkini/Desktop/myself.png')
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
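For reference, a minimal sketch of the pure-OpenCV approach from that tutorial, drawing the green rectangle on the live feed instead of calling OpenBR; the path to the Haar cascade XML file is an assumption and depends on where OpenCV is installed:

import cv2

# Path to the cascade file is an assumption; adjust it to your OpenCV installation.
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Detect faces on the grayscale image, then draw on the color frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()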

Why do the examples from the introductory OpenCV video tutorial throw errors, and what is the preferred workaround?

When testing the video playback example from the OpenCV introductory video tutorial, my videos (.m4v and .mov) always freeze for a little bit after they finish and then throw this error message:
---------------------------------------------------------------------------
error Traceback (most recent call last)
<ipython-input-33-6ff11ed068b5> in <module>()
15
16 # Our operations on the frame come here
---> 17 gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
18
19 # Display the resulting frame
error: /home/user/opencv-2.4.9/modules/imgproc/src/color.cpp:3737: error: (-215) scn == 3 || scn == 4 in function cvtColor
My interpretation is that this happens because the last frame is empty and lacks the channels that cvtColor() expects, so the image cannot be displayed. If I modify the example code slightly and replace the while(True) with a for loop that ends after the last frame in the video, I get no such message and the video window closes instead of freezing.
However, I assume there is a reason why this is not the default behaviour in OpenCV, and I'm afraid my modification will screw something up further down the road (completely new to OpenCV). So now I would like input on a few things:
Why is the while(True) the default for showing the video, since it is freezing and throwing an error message (if this is not unique to my setup)?
Is it safe to have the for loop or should I stick to while(True) and wait for the error message every time I play a video?
Is there a preferred way to have OpenCV exit video playback gracefully?
I have tried the suggestions here and they did help with not freezing up the kernel completely, but the video still freezes and the error message still shows. The for loop alternative seems smoother.
I'm using the IPython Notebook with Python 2.7.9 and OpenCV 2.4.9 on Ubuntu 14.04. Below is the code I'm executing.
import cv2

video_name = '/path/to/video'
cap = cv2.VideoCapture(video_name)

# while(True):  # causes freeze and throws error
for num in range(0, int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT))):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything done, release the capture
cap.release()
cv2.destroyAllWindows()
Since I ended up wanting to loop the video on some occasions, the following proved to be the best solution for me.
import numpy as np
import cv2

video_name = '/path/to/vid.mp4'
cap = cv2.VideoCapture(video_name)
frame_counter = 0
loop = True

while(True):
    # Capture frame-by-frame
    ret, frame = cap.read()

    # Our operations on the frame come here
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Display the resulting frame
    cv2.imshow('frame', gray)

    frame_counter += 1
    if frame_counter == cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT):
        if loop == True:
            frame_counter = 0
            cap = cv2.VideoCapture(video_name)
        else:
            break

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
If you don't need to loop your videos, just use

if frame_counter == cap.get(cv2.cv.CV_CAP_PROP_FRAME_COUNT):
    break
or Padraic's solution (slightly modified)
try:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
except cv2.error:
    break
The try/except statement does give a short delay where the video freezes before closing. The if statement closes it immediately when playback is finished.
I am still interested if someone can explain why the error message is encountered in the first place, since Padraic said this is not the case on his machine.
EDIT: So I noticed I misread the tutorial and should use while(cap.isOpened()) instead of while(True), but I still get the same error with while(cap.isOpened()).
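A minimal sketch of the more common pattern, which checks ret and leaves the loop before calling cvtColor on an empty frame:

import cv2

cap = cv2.VideoCapture('/path/to/vid.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        # cap.read() returns False once the last frame has been consumed
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()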
