Failing to capture video using OpenCV and Python on Ubuntu - python

I'm trying to write a simple python script to capture an image from a webcam using OpenCV. This is running on Ubuntu 11.10 32-bit.
When I run lsusb in the terminal I get:
Bus 002 Device 002: ID 045e:00f7 Microsoft Corp. LifeCam VX-1000
which leads me to believe that the driver for the camera is installed.
In a python shell I type:
capture=cv.CaptureFromCAM(0) # also tried -1, 1, 2, 3
but capture is always null.
I also tried:
capture = cv.CreateCameraCapture(0)
But I get the same result.
Would appreciate any help.
Cheers,

Merely probing the driver does not prove that the camera will work.
Here is an Ubuntu support page on testing your camera with VLC.
Basically you should try something like:
$ vlc v4l2:///dev/video0
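If you want to do a similar sanity check from Python before touching OpenCV, you can look for V4L2 device nodes directly. A minimal stdlib sketch (the /dev/video* pattern is the standard Linux one):

```python
import glob

def camera_nodes():
    """Return the V4L2 device nodes the kernel has created.

    lsusb only proves the camera enumerated on the USB bus; a /dev/video*
    node existing means a V4L2 driver actually bound to it, which is what
    OpenCV's capture needs.
    """
    return sorted(glob.glob("/dev/video*"))

print(camera_nodes())  # e.g. ['/dev/video0'] on a working setup
```

If this list is empty, no capture index passed to OpenCV will work.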

I don't think this camera is supported by OpenCV.
OpenCV has a compatibility list, check if yours is there.

Related

recording audio in raw or pcm format using python

I am trying to use porcupine on my Jetson Nano as a wake word.
In order to do this I need to record audio in PCM format (which I believe is raw format) using Python.
I also need the sample rate to be 16,000 Hz, 16-bit linearly encoded on a single channel. My input device index is 11.
So, how would I be able to record audio in this format using Python?
It looks like there is a demo already setup for this from porcupine's side.
Check out their demo here; it's a lot of code, so I won't paste it all.
Essentially it requires installing the pvporcupinedemo package:
$ sudo pip3 install pvporcupinedemo
And then running the demo script (located in the Python demo) to start running the processing:
$ porcupine_demo_mic --access_key ${ACCESS_KEY} --keywords picovoice
There are various arguments to this script, which can be found documented in the repo itself.
The demo explicitly states that this should work for the Jetson Nano:
Runs on Linux (x86_64), Mac (x86_64 and arm64), Windows (x86_64),
Raspberry Pi (all variants), NVIDIA Jetson (Nano), and BeagleBone.
To make sure the demo detects your microphone, you can run the script with the flag that lists audio devices:
$ porcupine_demo_mic --show_audio_devices
And you should see something like:
index: 0, device name: USB Audio Device
index: 1, device name: MacBook Air Microphone
Then you can determine which mic is correct, and use the index as an argument to the demo, e.g. for the "USB Audio Device":
$ porcupine_demo_mic --access_key ${ACCESS_KEY} --keywords picovoice --audio_device_index 0
I would then go ahead and start picking apart the code in their demo to modify it as required.
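For reference, the format Porcupine expects is plain 16 kHz, 16-bit, mono linear PCM. Live capture would normally go through a library such as pyaudio or sounddevice, but the byte layout itself can be sketched with the standard library alone; here a synthesized 440 Hz tone stands in for microphone samples:

```python
import math
import struct
import wave

SAMPLE_RATE = 16000   # Porcupine expects 16 kHz
SAMPLE_WIDTH = 2      # 16-bit linear PCM
CHANNELS = 1          # mono

# One second of 16-bit little-endian PCM frames: the same layout a
# microphone callback would hand you, one signed short per sample.
frames = b"".join(
    struct.pack("<h", int(32767 * 0.3 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE)))
    for n in range(SAMPLE_RATE)
)

with wave.open("tone.wav", "wb") as wf:
    wf.setnchannels(CHANNELS)
    wf.setsampwidth(SAMPLE_WIDTH)
    wf.setframerate(SAMPLE_RATE)
    wf.writeframes(frames)
```

Whatever recording library you end up with, the samples it delivers should match these three parameters before you feed them to Porcupine.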

Problems with Aruco library on Debian 9.5 - OpenCV

I am trying to detect Aruco markers through my camera using OpenCV for Python 2.7 on Debian 9.5, but I can't run my code because of an error from cv2.aruco.detectMarkers(). Running it on Windows, it does not have any problem. In particular, I wrote in my code:
cv2.aruco.detectMarkers(image=gray, dictionary=aruco_dict, parameters=parameters,
                        cameraMatrix=camera_matrix, distCoeff=camera_distortion)
where camera_matrix and camera_distortion are respectively the camera matrix and the camera distortion parameters I got by camera calibration.
More precisely, the error says that there's no cameraMatrix input parameter for the function cv2.aruco.detectMarkers. How do I fix this problem? Thank you very much in advance.
Maybe your error is due to your opencv version. Check it with:
cv2.__version__
Older versions of opencv (such as 3.2.0, that is maybe your default version for Debian 9) do not have cameraMatrix or distCoeff as input parameters of cv2.aruco.detectMarkers function.
If you are interested in getting newer versions of opencv for your OS (such as 4.1.0.25), you have to do:
sudo pip install opencv-contrib-python==4.1.0.25
If you are not, just remove cameraMatrix and distCoeff from your inputs; it will run anyway.
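If the same script has to run on both builds, you can gate the extra arguments on the version string instead of editing the call by hand. A minimal sketch; the 3.3 cutoff is an assumption based on the answer above, so check it against your own build:

```python
def parse_version(v):
    """Turn an OpenCV version string like '3.2.0' into a comparable tuple."""
    return tuple(int(p) for p in v.split(".")[:3])

def detect_kwargs(version, camera_matrix, camera_distortion):
    """Build optional keyword arguments for cv2.aruco.detectMarkers.

    Older builds (e.g. the 3.2.0 shipped with Debian 9) reject the
    cameraMatrix/distCoeff keywords, so only pass them on newer versions.
    The (3, 3, 0) threshold is an assumption; adjust for your build.
    """
    kwargs = {}
    if parse_version(version) >= (3, 3, 0):
        kwargs["cameraMatrix"] = camera_matrix
        kwargs["distCoeff"] = camera_distortion
    return kwargs

# Usage (assuming cv2 and the variables from the question are available):
# corners, ids, rejected = cv2.aruco.detectMarkers(
#     gray, aruco_dict, parameters=parameters,
#     **detect_kwargs(cv2.__version__, camera_matrix, camera_distortion))
```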

Python with Gstreamer pipeline

I'm working on a Udoo, trying to get the camera to take a picture that I can manipulate inside Python.
So far, the camera works with
gst-launch-1.0 imxv4l2videosrc ! imxipuvideosink
I can also take a single picture with
gst-launch-1.0 imxv4l2videosrc num-buffers=1 ! video/x-raw ! jpegenc ! filesink location=output.jpg
From here it seems like you can read straight from a gstreamer stream in Python with OpenCV.
Here is my python code:
import cv2
cam = cv2.VideoCapture("imxv4l2videosrc ! video/x-raw ! appsink")
ret, image = cam.read()
However, ret is False, and image is nothing.
Some places say this only works with OpenCV 3.0+, and others say 2.4.x, but I can't seem to find an actual answer to what version it works on.
If I need to update to OpenCV 3.0, which part do I update? I downloaded OpenCV via the apt repositories under the package python-opencv. So do I need to update Python? Can I just build OpenCV from source, and will Python automatically use the newest version? I'm so confused.
The Ubuntu/Debian packaged version is the old 2.4.x; to get the latest one you need to compile it from source.
Here are two tutorials on how to do that:
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_setup/py_setup_in_fedora/py_setup_in_fedora.html#installing-opencv-from-source
http://www.pyimagesearch.com/2015/07/20/install-opencv-3-0-and-python-3-4-on-ubuntu/
The first is for Python 2.7 on Fedora, the second for Python 3.4 on Ubuntu.
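Once you have a build compiled with GStreamer support, the capture string usually also needs a videoconvert stage and BGR caps before appsink, so OpenCV receives frames in a layout it understands. A sketch of assembling that pipeline; the source element is the one from the question and the rest are standard GStreamer elements, untested here:

```python
elements = [
    "imxv4l2videosrc",          # the i.MX capture source from the question
    "videoconvert",             # convert whatever the source emits...
    "video/x-raw, format=BGR",  # ...into the BGR layout OpenCV expects
    "appsink",                  # hand frames over to the application
]
pipeline = " ! ".join(elements)
print(pipeline)

# Then (assuming an OpenCV build with GStreamer enabled):
# cam = cv2.VideoCapture(pipeline)
# ok, frame = cam.read()
```

If cam.read() still returns False, checking cv2.getBuildInformation() for a "GStreamer: YES" line is a quick way to confirm the build actually has the backend.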

cannot get frame from openni device in python-opencv

I am using raspberry pi to get frames from ASUS Xtion openni device.
Python-opencv, OpenNI, and OpenCV are installed on raspberry pi correctly.
I am using the following code:
import cv2
import cv2.cv as cv
capture = cv2.VideoCapture(cv.CV_CAP_OPENNI)
capture.set(cv.CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE, cv.CV_CAP_OPENNI_VGA_30HZ)
okay, color_image = capture.retrieve(0, cv.CV_CAP_OPENNI_BGR_IMAGE)
This code was working without any problems before, but now the "okay" value always comes back as False. How can I fix this problem?
Thanks,
Do you have the V4L2 driver loaded? If not, load it with:
sudo modprobe bcm2835-v4l2

Using OpenKinect on Raspberry pi in python

I am very new to raspberry pi and python.
I am trying to write a program using Python on the Raspberry Pi to use the Kinect. I aim to install OpenKinect on the Raspberry Pi.
So far I have done:
apt-cache search OpenKinect
sudo apt-get install python-freenect
sudo apt-get update
Next I tried writing a Python program from this link: https://github.com/OpenKinect/libfreenect/blob/master/wrappers/python/demo_cv_async.py
When I try to run the program, it fails at line 5, import cv, with:
ImportError: no module named cv
I am not sure if I have installed all the necessary files. I am also not sure what I have done wrong.
I also have been trying to look for tutorials on installing and using OpenKinect.
Congratulations on starting Python! That sounds like a complicated project to start on. You should probably try doing the tutorial first at python.org. I particularly like the Google video tutorials (if you are a classroom kind of person): http://www.youtube.com/watch?v=tKTZoB2Vjuk
After that you can dig into more detailed stuff :)
It looks like you still don't have the OpenCV package for Python. Try to install it:
sudo apt-get install python-opencv
The "OpenGL or GTK-Warning: Cannot open display" error, or the other one you stated,
Number of devices found: 1 GL thread write reg 0x0105 <= 0x00 freeglut(freenect-glview):
OpenGL GLX extension not supported by display ':0.0'
is because the glview demo uses OpenGL, which the Pi does not support; the Pi only provides GLES through EGL.
bmwesting (Brandt) wrote:
"The freenect library provides a demo for the Kinect called glview. The glview program will not work with the Pi because the program is written using OpenGL. The Raspberry Pi only supports GLES through EGL.
It seems like you will be able to use libfreenect to grab the depth stream and rgb stream, but will be unable to run the demo program because it uses the incorrect graphics API."
If you read through that thread, it should show the alternatives (i.e. the ASUS Xtion instead of the Kinect). They reach 30 fps at high (~1024x800) resolution for depth data when using console output mode. I plan to go for the Xtion now too, and I hope to deactivate as much as possible on the USB bus (as this seems to be the bottleneck, for the Kinect too I think).
When you install OpenCV using apt-get install python-opencv you are installing version 2. However, you can still use the methods from version 1 like so:
import cv2.cv as cv
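If you want a script that copes with whichever layout is present, you can try both import paths and fall back gracefully. A small sketch; it returns None when neither legacy API exists, e.g. on OpenCV 3+, where the old cv module was removed entirely:

```python
def load_legacy_cv():
    """Return the legacy 'cv' module from whichever location exists, else None.

    OpenCV 2.x exposes the old 1.x API as cv2.cv; some very old installs
    shipped a top-level cv module instead; OpenCV 3+ dropped it altogether.
    """
    try:
        import cv2.cv as cv
        return cv
    except ImportError:
        pass
    try:
        import cv
        return cv
    except ImportError:
        return None
```

In the demo script you could then replace the bare import cv with cv = load_legacy_cv() and report a clear error when it comes back None.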
