I am using a Raspberry Pi to get frames from an ASUS Xtion OpenNI device.
python-opencv, OpenNI, and OpenCV are installed correctly on the Raspberry Pi.
I am using the following code:
import cv2
import cv2.cv as cv
capture = cv2.VideoCapture(cv.CV_CAP_OPENNI)
capture.set(cv.CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE, cv.CV_CAP_OPENNI_VGA_30HZ)
okay, color_image = capture.retrieve(0, cv.CV_CAP_OPENNI_BGR_IMAGE)
This code was working without any problems before, but now the "okay" value is always False. How can I fix this problem?
Thanks,
Do you have the V4L drivers loaded? If not, load them with:
sudo modprobe bcm2835-v4l2
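If the driver is there but you still get False, it is also worth checking that the device opens at all and that a frame is grabbed before being retrieved. A minimal sketch, reusing the constants from the question (the explicit grab()-before-retrieve() step is an assumption about how the OpenNI backend is usually driven):
import cv2
import cv2.cv as cv

capture = cv2.VideoCapture(cv.CV_CAP_OPENNI)
print("opened:", capture.isOpened())  # False here points to a driver/permission problem

if capture.grab():  # pull a fresh OpenNI frame first
    okay, color_image = capture.retrieve(0, cv.CV_CAP_OPENNI_BGR_IMAGE)
    print("retrieved:", okay)
else:
    print("grab() failed: no frame available from the device")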
I want to decode a QR code on Ubuntu/Linux. I installed the library and ran the code below, but the result is an empty array. On my Windows machine the result is the correct code data. Can anyone try the code on their Linux machine and, if it works, tell me the correct way to install the pyzbar library? I think that is where the problem is. If you have any other ideas, please let me know.
Image (QR code): https://www.directupload.net/file/d/5519/tsv8hg76_jpg.htm
I tried the code on Windows and it works perfectly fine.
On Ubuntu I installed the library as described here: https://pypi.org/project/pyzbar/
import cv2
import pyzbar.pyzbar as pyzbar

Image = cv2.imread("wfunktioniert.jpg", 0)  # load the QR image as grayscale
decodedObjects = pyzbar.decode(Image)       # returns a list of decoded symbols
print(decodedObjects)
print("Ende")
No errors are raised; decode() just returns an empty result.
I encountered this problem today. A picture decoded fine on my Mac but could not be decoded on my Ubuntu server.
I finally found out that my Ubuntu server's RAM is too small: it has only 2 GB, while my Mac has 16 GB. I created a VM with 4 GB of RAM, tested the picture again with pyzbar, and it decoded successfully.
So I think you can try adding more RAM to your machine.
Alternatively, you can try zxing; I found that zxing decoded the picture even with 2 GB of RAM.
To import the module you can use:
from pyzbar import pyzbar
and to decode:
barcode = pyzbar.decode(image)
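Each element of the list returned by decode() carries the payload and the symbol type, so printing them is a quick way to see whether anything was found at all. A short sketch, reusing the image from the question:
import cv2
from pyzbar import pyzbar

image = cv2.imread("wfunktioniert.jpg", 0)  # grayscale, as in the question
barcodes = pyzbar.decode(image)

if not barcodes:
    print("nothing decoded")
for barcode in barcodes:
    print(barcode.type, barcode.data.decode("utf-8"))  # e.g. QRCODE and the encoded text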
ZBar + OpenCV link.
Follow the link.
I'm working on an Udoo trying to get the camera to take a picture that I can manipulate inside Python.
So far, the camera works with
gst-launch-1.0 imxv4l2videosrc ! imxipuvideosink
I can also take a single picture with
gst-launch-1.0 imxv4l2videosrc num-buffers=1 ! video/x-raw ! jpegenc ! filesink location=output.jpg
From here it seems like you can read straight from a GStreamer pipeline in Python with OpenCV.
Here is my Python code:
import cv2
cam = cv2.VideoCapture("imxv4l2videosrc ! video/x-raw ! appsink")
ret, image = cam.read()
However, ret is False and image is None.
Some places say this only works with OpenCV 3.0+, and others say 2.4.x, but I can't seem to find an actual answer to what version it works on.
If I need to update to OpenCV 3.0, which part do I update? I downloaded OpenCV via the apt repositories under the package python-opencv. So do I need to update Python? Can I just build OpenCV from source, and Python will automatically use the newest version? I'm so confused.
The Ubuntu/Debian package is an old 2.4.x version; to get the latest one you need to compile it from source.
Here are two tutorials on how to do that:
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_setup/py_setup_in_fedora/py_setup_in_fedora.html#installing-opencv-from-source
http://www.pyimagesearch.com/2015/07/20/install-opencv-3-0-and-python-3-4-on-ubuntu/
The first is for Python 2.7 on Fedora, the second for Python 3.4 on Ubuntu.
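After building from source, you can check from Python which version is actually being imported and whether the GStreamer backend was compiled in, since the pipeline-string form of VideoCapture depends on it. A minimal sketch (the exact wording of the build-information line may differ between versions):
import cv2

print(cv2.__version__)  # should report 3.x after the source build

for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line:  # the appsink pipeline only works if this says YES
        print(line.strip())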
I'm trying to write an app using Python and PIL that involves processing frames from my MacBook's camera. Is there an easy way to capture frames from the iSight for use with PIL without installing too many libraries?
I've seen a few questions on SO about doing this with OpenCV, which I don't want to do. Can I do this without using OpenCV, as I've had trouble installing it?
Maybe you can try using the SimpleCV library. After installing it, the following code should work:
from SimpleCV import Camera
from SimpleCV import Image
webcam_camera = Camera()
webcam_image = webcam_camera.getImage()
webcam_image.save("frame.jpg")
That said, OpenCV would be a much better option. If you follow the instructions on their website, it wouldn't be too hard to get it up and running.
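For comparison, the equivalent capture with OpenCV itself is only a few lines. A minimal sketch, assuming the iSight shows up as device 0:
import cv2

camera = cv2.VideoCapture(0)  # the built-in iSight is usually the first device
ret, frame = camera.read()    # frame is a NumPy array

if ret:
    cv2.imwrite("frame.jpg", frame)  # same end result as SimpleCV's save()
camera.release()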
I am very new to the Raspberry Pi and Python.
I am trying to write a program in Python on the Raspberry Pi to use the Kinect. I aim to install OpenKinect on the Raspberry Pi.
So far I have done:
apt-cache search OpenKinect
sudo apt-get install python-freenect
sudo apt-get update
Next I tried writing code in Python from this link: https://github.com/OpenKinect/libfreenect/blob/master/wrappers/python/demo_cv_async.py
When I try to run the program, it fails at line 5 (import cv) with:
ImportError: No module named cv
I am not sure if I have installed all the necessary files, and I am not sure what I have done wrong.
I have also been trying to find tutorials on installing and using OpenKinect.
Congratulations on starting Python! That sounds like a complicated project to start on. You should probably try doing the tutorial at python.org first. I particularly like the Google video tutorials (if you are a classroom kind of person): http://www.youtube.com/watch?v=tKTZoB2Vjuk
After that you can dig into more detailed stuff :)
It looks like you still don't have the OpenCV package for Python. Try installing it:
sudo apt-get install python-opencv
The 'OpenGL or GTK-Warning: Cannot open display' error, or the other one you stated,
Number of devices found: 1 GL thread write reg 0x0105 <= 0x00
freeglut(freenect-glview): OpenGL GLX extension not supported by display ':0.0'
appears because the glview demo is written against desktop OpenGL, which the Raspberry Pi does not provide; the Pi only supports GLES through EGL.
bmwesting (Brandt) wrote:
"The freenect library provides a demo for the Kinect called glview. The glview program will > not work with the Pi because the program is written using OpenGL. The Raspberry Pi only supports GLES through EGL.
It seems like you will be able to use libfreenect to grab the depth stream and rgb stream, > but will be unable to run the demo program because it uses the incorrect graphics API."
If you read through that thread, it shows the alternatives (i.e. an ASUS Xtion instead of the Kinect). They reach 30 fps at high (~1024x800) resolution for depth data when using console output mode. I plan to go for the Xtion now too, and I hope to deactivate as much as possible on the USB bus (as this seems to be the bottleneck, for the Kinect too I think).
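As the quote says, grabbing the depth and RGB streams without the GL demo is still possible; the Python wrapper has a synchronous interface for exactly that. A minimal sketch, assuming the python-freenect package from the question is installed and a Kinect is attached:
import freenect

depth, _ = freenect.sync_get_depth()  # depth frame as a NumPy array
rgb, _ = freenect.sync_get_video()    # RGB frame as a NumPy array
print(depth.shape, rgb.shape)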
When you install OpenCV using apt-get install python-opencv, you are installing version 2. However, you can still use the methods from version 1 like this:
import cv2.cv as cv
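Both APIs can then be imported side by side from the same package, which is enough to run the old-style calls used in demo_cv_async.py once its import line is changed accordingly. A short sketch (OpenCV 2.4.x only; the cv submodule was removed in OpenCV 3):
import cv2           # new-style API (NumPy arrays)
import cv2.cv as cv  # old 1.x-style API, as used by the freenect demo

print(cv2.__version__)   # 2.4.x when installed via python-opencv
cv.NamedWindow("Depth")  # an old-style call from the demo now works
cv.DestroyAllWindows()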
I'm trying to write a simple Python script to capture an image from a webcam using OpenCV. This is running on Ubuntu 11.10 32-bit.
When I run lsusb in the terminal, I get:
Bus 002 Device 002: ID 045e:00f7 Microsoft Corp. LifeCam VX-1000
which leads me to believe that the driver for the camera is installed.
In a Python shell I type:
import cv
capture = cv.CaptureFromCAM(0)  # also tried -1, 1, 2, 3
but capture is always null.
I also tried:
capture = cv.CreateCameraCapture(0)
But I get the same result.
I would appreciate any help.
Cheers,
Merely probing the driver does not validate that the camera will work.
Here is an Ubuntu support page on testing your camera with VLC.
Basically you should try something like:
$ vlc v4l2:///dev/video0
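If VLC does show a picture but the capture is still null from Python, it can also help to probe a few device indices directly. A quick sketch using the same old-style API as the question:
import cv

for index in range(4):  # try the first few device indices
    capture = cv.CaptureFromCAM(index)
    frame = cv.QueryFrame(capture) if capture else None  # None means no frame from this index
    print(index, "ok" if frame is not None else "no frame")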
I don't think this camera is supported by OpenCV.
OpenCV has a compatibility list; check whether yours is on it.