OpenCV cannot read frames from USB camera on Raspberry Pi - python

I have a Logitech C270 USB webcam connected to my Raspberry Pi 3, running a Jessie image. I have tried to capture frames with the simple tutorial code at
http://www.pyimagesearch.com/2016/02/22/writing-to-video-with-opencv/
Whenever I try to read frames in the while loop, it gives this error:
'NoneType' object has no attribute 'shape'
I have printed the output of vs.read() and it also returns a None object.
What can I do to resolve this problem?
NOTE: When I ran cmake to build the binaries for OpenCV 3.1 on the Raspberry Pi, I didn't configure OpenCV to use V4L. Could this be the problem?
Thanks in advance.

It's because your video stream object is not attached to the camera, hence no image is captured.
If you are using the Pi Camera, make sure to pass --picamera 1 as an argument when running the script.
Otherwise, your camera is not connected to your Pi correctly.
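Whatever the cause turns out to be, the crash itself can be avoided by checking what read() returns before touching frame. A minimal sketch (safe_read and DeadCapture are made-up names; DeadCapture stands in for a stream that never attached, so no camera or cv2 is needed to run it):

```python
def safe_read(cap, retries=5):
    """Return a frame from cap.read(), or None if the capture keeps failing."""
    for _ in range(retries):
        ok, frame = cap.read()
        if ok and frame is not None:
            return frame
    return None

# Stand-in for a VideoCapture whose stream never attached:
class DeadCapture:
    def read(self):
        return (False, None)

print(safe_read(DeadCapture()))  # -> None, instead of crashing on frame.shape
```

This turns the cryptic AttributeError into an explicit None you can test for and report on.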

Related

Python: How to get image from USB Webcam?

I am very new to programming with Python :)
My Setup:
Windows 11 Pro
Python 3.7
OpenCV 4.7
Webcam: HD Pro Webcam C920 from logitech
I want to access the camera image in Python. Everything works with my integrated camera.
When I try to access the USB camera, nothing works; I get no video from my USB webcam.
I tried the following code to see if a camera is present:
import cv2
cap = cv2.VideoCapture(0)
print(cap.isOpened())
It works with my integrated webcam: I get True.
With cap = cv2.VideoCapture(2) I get False.
But if I try cap = cv2.VideoCapture(1), I get nothing back and there is no error; the program just keeps running. My conclusion: somehow the USB camera must be responding, otherwise I would also expect a False for cap = cv2.VideoCapture(1).
I have already found and tried many things on the Internet. Unfortunately, nothing has helped.
I have also tried pygame, but that does not seem to run under Windows.
How do I get video from my USB camera?
Thanks for your help :)

Change to USB camera [duplicate]

I copied code from https://stackoverflow.com/a/34588758/210342 and used it with the default (built-in) camera; it worked. Then I attached a USB camera, tested it with VLC, and changed the code to open camera 1:
cam = cv2.VideoCapture(1)
I check whether the camera is open with cam.isOpened() -- it is -- but the camera is not enabled (its hardware indicator LED is off) and indeed all I see on the screen is a black frame.
Is there some extra special code to add in order to enable the USB camera?
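As a side note, isOpened() returning True while the picture stays black usually means frames are arriving but carry no signal. A quick check on pixel values can tell "black frames" apart from "no frames" (looks_black is a made-up helper; assumes numpy is available):

```python
import numpy as np

def looks_black(frame, threshold=5):
    """True if there is no frame at all, or it is near-uniformly dark."""
    return frame is None or float(np.max(frame)) < threshold

print(looks_black(np.zeros((480, 640, 3), dtype=np.uint8)))      # -> True
print(looks_black(np.full((480, 640, 3), 200, dtype=np.uint8)))  # -> False
```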
You can also refer to this link:
https://devtalk.nvidia.com/default/topic/1027250/how-to-use-usb-webcam-in-jetson-tx2-with-python-and-opencv-/
Here he changes the capture line to:
cap = cv2.VideoCapture("/dev/video1") # check this
1. Before plugging in the camera, go to your terminal home.
2. Type cd /dev.
3. Type ls video and then press Tab. If the only result is video0, that means only the built-in webcam is present.
4. Now repeat steps 1-3 with the USB webcam plugged in. You should find video1 or video2 when you repeat the steps.
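The same check can be done from Python instead of the terminal. A small sketch (video_devices is a made-up name; Linux only) that just lists the /dev/video* nodes:

```python
import glob

def video_devices():
    """Return the /dev/video* device nodes currently present (Linux only)."""
    return sorted(glob.glob("/dev/video*"))

print(video_devices())  # e.g. ['/dev/video0'] before plugging in the USB cam
```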
I ran into the same problem; it turns out sometimes the webcam can take both slots 0 and 1.
So cam = cv2.VideoCapture(2) worked for me. This was found using the cd /dev method above.
Are you sure the USB camera is camera 1? I've done this before and had to use cv2.VideoCapture(0).
I do not know why, but on my laptop (Acer Aspire 3) the USB webcam works with Python OpenCV only if I plug it into the right-side USB port of my laptop and NOT if I plug it into the left-side USB port. So try plugging the webcam into all the USB ports you have. (I also had to use cam = cv2.VideoCapture(2) as @Slayahh suggested.)
In accordance with the accepted answer and this https://stackoverflow.com/a/60603969/4451944
I realized that in cv2.VideoCapture(4) the parameter 4 corresponds directly to the numeric suffix of /dev/video4.

Capturing images with Raspberry Pi Camera and having a python code save them in a server in Windows

I have a Raspberry Pi 3 with a PiCamera system that captures images automatically. It captures 500-2000 images per run. I need them to be saved to a server on my Windows PC/laptop. I'm using Python to code. Does anyone know how I can do this?
To reiterate, can anyone show me an example or guide me on how to code the Raspberry Pi to save the newly captured images to the server?
What I have in mind is to have the image-capturing code, together with the code that saves the images to the server on my Windows PC/laptop, on the Raspberry Pi board itself (saved on a microSD).
Thank you, and I apologize if this is confusing. It's my first time working with a Raspberry Pi.
So I'll include more context on the situation.
A run lasts 1 hour minimum and 2 hours maximum.
Pixel dimensions per image: 1344 x 1344 (w x h).
File size per image: 1.74 MB.
The time for Windows to read the images is critical. We're looking at within 2-3 seconds, as I have an Automatic Visual Inspection (AVI) system analyzing these images in real time.
The Raspberry Pi and Windows may be connected by WiFi or wired Ethernet; both are viable. Right now I am testing using WiFi, though I am able to use wired Ethernet as well.
To give additional information, the AVI system produces an image with a size of 12.3 KB per image after the raw images have been processed, which is saved in a separate folder. That is to say, there are 2 folders of images: the 1st folder contains the original images and the 2nd folder contains the processed/analyzed images.
I'm facing an error (screenshots of the error, the ipconfig output from Windows, and the NET SHARE output from the Windows command line were attached as images).
There are lots of possibilities... but your question is light on details.
How long is a "run" in terms of time?
How big is each image, in pixel height and width and in bytes saved on disk?
Is the time from when the image is captured to when it is saved on disk on Windows critical -- I mean, does Windows need to see the image within a second, or 20 seconds, or 15 minutes of the image being taken?
How are the Raspberry Pi and Windows connected -- Bluetooth, WiFi, wired Ethernet?
In the meantime, some options are:
Make a "share" on Windows that is visible to the Raspberry Pi, mount it using Samba on the Raspberry Pi, and write your images there directly; Windows will see the images on the shared disk. See here.
Create a FAT32 partition on a USB memory stick, plug it into the Raspberry Pi's USB port and write your images there, then plug it into your Windows machine at the end to transfer the images. Both Linux and Windows can read/write FAT32, which is why I suggest that format.
Use netcat or a socket or ssh to send the files to Windows over the network after capture.
Use Putty from Windows to collect the files from the Raspberry Pi after capture.
Share a Redis instance between the Raspberry Pi and Windows. Let Raspberry Pi load images into Redis and push their names onto a Redis queue and let Windows pop the names off the queue and pick up the images.
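The first option is usually the simplest: once the Windows share is mounted on the Pi (e.g. via Samba/CIFS at a mount point such as /mnt/winshare), "saving to the server" is just saving to a local directory. A minimal sketch; the mount point, folder name, and naming scheme are all assumptions to adapt:

```python
import os
import time

def save_frame(jpeg_bytes, out_dir="/mnt/winshare/captures"):
    """Write one captured image into the mounted Windows share."""
    os.makedirs(out_dir, exist_ok=True)
    name = "img_%d.jpg" % int(time.time() * 1000)  # timestamped filename
    path = os.path.join(out_dir, name)
    with open(path, "wb") as f:
        f.write(jpeg_bytes)
    return path
```

Windows then sees each image as soon as the write completes, which fits the 2-3 second requirement as long as the network mount keeps up.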

Taking a snapshot from webcam while using the camera in another function

I am using the camera to detect cars 24/7 using Python, OpenCV and a normal USB webcam.
In order to take a snapshot, I made a function to call when needed:
def SendPic():
    capture = cv.CaptureFromCAM(0)
    img = cv.QueryFrame(capture)
    cv.SaveImage('pic.jpg', img)
It works fine when used alone, but when used inside my code this error comes up:
libv4l1: error setting pixformat: Device or resource busy
HIGHGUI ERROR: libv4l unable to ioctl VIDIOCSPICT
And the image is not saved or even captured.
How can I take this snapshot without stopping the camera from detecting the cars? What command can I use to stop the camera, take a capture, and then return to its main function?
If I were you, I'd make the picture-taking part a separate module, and have the car-detection module and the snapshot module call into it, using mutexes. You cannot have two separate entities controlling the same piece of hardware.
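A minimal sketch of that design (FrameServer is a made-up name): one object owns the camera, a lock guards the latest frame, and both the detector and the snapshot code read through it instead of opening the device twice:

```python
import threading

class FrameServer:
    """Single owner of the capture device; hands out the latest frame."""

    def __init__(self, source):
        self._source = source          # anything with read() -> frame
        self._lock = threading.Lock()
        self._latest = None

    def poll(self):
        """Grab one frame; in real code, call this in a loop on a Thread."""
        frame = self._source.read()
        with self._lock:
            self._latest = frame

    def latest(self):
        """Called by BOTH the car detector and the snapshot function."""
        with self._lock:
            return self._latest
```

The snapshot function then just saves server.latest() to disk, and the camera is never re-opened while detection is running.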

grab frame NTSCtoUSB dongle, opencv2, python wrapper

Context:
I have been playing around with Python's wrapper for OpenCV 2.
I wanted to play with a few ideas and use a wide-angle camera similar to the 'rear view' cameras in cars.
I got one from a scrapped crash car (it's got 4 wires). I took an educated guess from the wires' color coding and connected it up so that I power the power and ground lines from a USB Type-A connector and feed the NTSC composite+/composite- through an RCA connector.
I bought an NTSC-to-USB converter like this one.
It came with drivers and some off-the-shelf VHStoDVD software.
The problem:
I used the run-of-the-mill examples online to trial-test it, like this:
import numpy as np
import cv2
cam_index=0
cap=cv2.VideoCapture(cam_index)
print cap.isOpened()
ret, frame=cap.read()
#print frame.shape[0]
#print frame.shape[1]
while (cap.isOpened()):
    ret, frame=cap.read()
    #gray=cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
#release and close
cap.release()
cv2.destroyAllWindows()
this is the output from shell:
True
Traceback (most recent call last):
File "C:/../cam_capture_.py", line 19, in <module>
cv2.imshow('frame', frame)
error: ..\..\..\..\opencv\modules\highgui\src\window.cpp:261: error: (-215) size.width>0 && size.height>0 in function cv::imshow
>>>
Key observations:
(Screenshots were attached.)
In Control Panel the USB dongle shows up as 'OEM Capture' under Sound, Video & Game Controllers, so it's not seen as a simple plug-and-play webcam under 'Imaging Devices'.
If I open the VHStoDVD software I need to configure 2 aspects:
set the input as Composite
set the encoding as NTSC
Then the feed from the analog camera is shown OK within the VHStoDVD application.
When I open the device's video channel in FLV (device capture), the device stream is just a black screen, but IF I open the VHStoDVD software WHILE FLV is streaming, I get the camera's feed streaming in FLV and a black screen on the VHStoDVD feed. Another important difference is that there is a huge latency of approx. 0.5 s when the feed is in FLV, as opposed to running in VHStoDVD.
When running "cam_capture.py" as per the sample code above, at some point during runtime I will eventually get a stop error (BSOD) with code 0x0000008E:
detail:
stop: 0x0000008E (0xC0000005, 0xB8B5F417, 0X9DC979F4, 0X00000000 )
ks.sys - Address B8B5F417 base at B8B5900, Datestamp...
beg mem dump
phy mem dump complete
If I try to print frame.shape[0] or frame.shape[1], I get a TypeError saying I cannot print type None.
If I try other cam_index values, the result is always False.
TL;DR:
In Control Panel the camera device is under 'Sound, Video & Game Controllers', not under 'Imaging Devices';
The cam_index is zero;
capture.isOpened() returns True;
The frame size is None;
If VHStoDVD is running with composite NTSC configured, the camera works. Obviously you can't see the image in the attached printscreen, but trust me! ;)
Is there any form of initialization of the communication with the dongle that could fix this, i.e. emulate the VHStoDVD settings (composite + NTSC)? I thought I could Bus Pirate the start of comms between VHStoDVD and the dongle, but it feels like I am going above and beyond to do something I thought was a turnkey solution.
Any constructive insights, suggestions or corrections are most welcome!
Thanks
Cheers
OK, so after deeper investigation the initial suspicion was confirmed: because the NTSC dongle is not handled as an imaging device (it's seen as a video controller, similar to an emulation of a TV tuner card), although we are able to call cv2.VideoCapture with cam_index=0, the video channel itself is not transmitting, because we are required to define a bunch of parameters:
encoding
frame size
fps rate, etc.
The problem is that, because the device is not supported as an imaging device, calling cv2.VideoCapture.set(parameter, value) doesn't seem to change anything on the original video feed.
I didn't find a solution, but I found a workaround. There seem to be quite a few options online; search for the keywords "DV to webcam" or "camcorder as a webcam".
I used DVdriver (http://www.trackerpod.com/TCamWeb/download.htm) (I used the trial because I am cheap!).
Why does it work?
As far as I can tell, DVdriver receives the data from the device, which is set as a video controller (similar to a capture from "Windows Movie Maker" or ffmpeg), and then through "fairydust" outputs the frames on cam_index=0 (assuming no other cam is connected) as an 'imaging device' webcam.
Summary
TL;DR: use DVdriver or similar.
I found a workaround, but I would really like to understand it from first principles and possibly generate a similar initialization of the NTSC dongle from within Python, without any other software dependencies. Until then, hopefully this will help others who were also struggling or assuming it was a hardware issue.
I will now leave you with some Beckett:
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better. (!)
It's a few months late, but this might be useful. I was working on a Windows computer and had installed the drivers that came with the device. I tried the same code as in your question with an Ezcap from Somagic and got the same error. Since "frame is None", I decided to try an if statement around it -- in case it was an initialization error. Placing this into the loop:
if frame is None:
    print 0
else:
    print 1
The result is: 01110111111111111111111111111...
And if the ret, frame = cap.read() above the loop is commented out, I get: 00111111111111111...
So for my capture device it appears that all frames beyond the 5th are captured. I'm not sure why this is, but it might be a useful workaround for now.
Disclaimer: Unfortunately, my camera input is currently in a radiation field, so I can't get to it for a couple of weeks to make sure it works for sure. However, the images are currently a black frame (which is expected without proper input).
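That observation generalizes to a small warm-up loop that discards reads until real frames arrive (warm_up is a made-up name; SlowStart below is a fake capture standing in for a device that needs a few reads to start up):

```python
def warm_up(cap, max_tries=30):
    """Read until a non-None frame arrives; None if we give up."""
    for _ in range(max_tries):
        ret, frame = cap.read()
        if frame is not None:
            return frame
    return None

class SlowStart:
    """Fake capture: returns None for the first 5 reads, then frames."""
    def __init__(self):
        self.calls = 0
    def read(self):
        self.calls += 1
        return (True, "frame") if self.calls > 5 else (False, None)

print(warm_up(SlowStart()))  # -> 'frame' (after 5 discarded reads)
```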
I faced the same issue. As a workaround, I first tried the solution proposed by @user3380927 and it worked indeed. But since I didn't want to rely on external software, I started tweaking parameters using OpenCV in Python.
These lines of code worked like a charm (you have to insert them before reading the frame for the first time):
cam.set(cv2.CAP_FFMPEG,True)
cam.set(cv2.CAP_PROP_FPS,30)
So, the full code for basic camera reading is as follows:
import cv2
cam = cv2.VideoCapture(1)
cam.set(cv2.CAP_FFMPEG,True)
cam.set(cv2.CAP_PROP_FPS,30)
while(True):
    ret,frame = cam.read()
    cv2.imshow('frame',frame)
    if (cv2.waitKey(1) & 0xFF == ord('q')):
        break
cam.release()
cv2.destroyAllWindows()
You can then apply image processing operations as usual. Just for reference, this was my configuration:
OpenCV 3.1.0
Python 2.7.5
Windows 8.1
Elgato Video Capture device (this was also shown under Sound, Video & Game Controllers)
