I am very new to the Raspberry Pi and Python.
I am trying to write a program in Python on the Raspberry Pi to use the Kinect. My aim is to install OpenKinect on the Raspberry Pi.
So far I have done:
apt-cache search OpenKinect
sudo apt-get install python-freenect
sudo apt-get update
Next I tried writing some code in Python from this link: https://github.com/OpenKinect/libfreenect/blob/master/wrappers/python/demo_cv_async.py
When I try to run the program, it reports an error on line 5,
import cv
ImportError: No module named cv
I am not sure if I have installed all the necessary files, and I am not sure what I have done wrong.
I have also been looking for tutorials on installing and using OpenKinect.
Congratulations on starting Python! That sounds like a complicated project to start on. You should probably work through the tutorial at python.org first. I particularly like the Google video tutorials (if you are a classroom kind of person): http://www.youtube.com/watch?v=tKTZoB2Vjuk
After that you can dig into more detailed stuff :)
It looks like you still don't have the OpenCV package for Python. Try installing it:
sudo apt-get install python-opencv
The OpenGL or GTK warning "Cannot open display", or the other one you quoted,
Number of devices found: 1
GL thread write reg 0x0105 <= 0x00
freeglut (freenect-glview): OpenGL GLX extension not supported by display ':0.0'
happens because the demo uses desktop OpenGL, which the Raspberry Pi does not support; the Pi only provides GLES through EGL.
bmwesting (Brandt) wrote:
"The freenect library provides a demo for the Kinect called glview. The glview program will not work with the Pi because the program is written using OpenGL. The Raspberry Pi only supports GLES through EGL.
It seems like you will be able to use libfreenect to grab the depth stream and rgb stream, but will be unable to run the demo program because it uses the incorrect graphics API."
If you read through that thread, it shows the alternatives (i.e. the ASUS Xtion instead of the Kinect). They reach 30 fps at high (~1024x800) resolution for depth data when using console output mode. I plan to go for the Xtion now too, and I hope to deactivate as much as possible on the USB bus (as this seems to be the bottleneck, for the Kinect too I think).
When you install OpenCV using apt-get install python-opencv you are installing version 2. However, you can still use the methods from version 1 like this:
import cv2.cv as cv
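Since that import only exists on OpenCV 2.x (the cv2.cv module was removed in OpenCV 3), a guarded import like this minimal sketch lets a script detect which situation it is in rather than crash:

```python
# Sketch: OpenCV 2.x ships the legacy version-1 API as cv2.cv;
# it was removed in OpenCV 3, so guard the import.
try:
    import cv2.cv as cv        # works on OpenCV 2.x
    legacy_cv_available = True
except ImportError:            # OpenCV 3+, or OpenCV missing entirely
    cv = None
    legacy_cv_available = False

print("legacy cv available:", legacy_cv_available)
```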
Related
I would like to get the Windows 10 device specifications for pen and touch properties with Python.
I tried two ways:
systeminfo in the command line
msinfo32
Neither way found the answer. Is there any way to get this?
I expect there is a Python package or some other method.
You can use the pyglet module to list all devices connected to your PC/laptop.
First install pyglet:
pip install pyglet
Code:
import pyglet

# get_devices() returns every input device pyglet can see
devices = pyglet.input.get_devices()
print(devices)
Reference: the pyglet docs.
I want Python to save an image of the whole screen as a variable using ctypes, so that I can access the screen in a program and do something with it (like put it on a pygame window). I can't use any other libraries unless they are included with Python (no installing or pip). Does anyone know how to do this?
Edit: I'm using Windows 10.
PIL.ImageGrab is from Pillow (a Python imaging library fork, which you can install with pip). You can give a bounding box or capture the entire screen.
Update: OP now mentions he can't use external libraries.
Then you could virtually hit Print Screen and read the clipboard. The code of Pillow is open source; feel free to use it.
Remember that you can always call a command from within python:
>>> import os
>>> os.system("pip install pillow")
Or download the zip of the library and import it in your code.
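To make the "virtually hit Print Screen" idea concrete, here is a minimal Windows-only sketch of the key-press half using only the standard library's ctypes. The function name is mine, and reading the bitmap back off the clipboard (CF_BITMAP) needs further GDI calls via ctypes that are left out here:

```python
import ctypes
import sys

VK_SNAPSHOT = 0x2C        # virtual-key code for the Print Screen key
KEYEVENTF_KEYUP = 0x0002  # flag marking the key-release event

def press_print_screen():
    """Simulate a Print Screen key press so Windows copies the screen
    bitmap to the clipboard (Windows-only; raises elsewhere)."""
    if sys.platform != "win32":
        raise OSError("this sketch only works on Windows")
    user32 = ctypes.windll.user32
    user32.keybd_event(VK_SNAPSHOT, 0, 0, 0)                # key down
    user32.keybd_event(VK_SNAPSHOT, 0, KEYEVENTF_KEYUP, 0)  # key up
```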
I have a simulation inside Blender that I would like to control externally using the TensorFlow library. The complete process would go something like this:
while True:
state = observation_from_blender()
action = find_action_using_tensorflow_neural_net(state)
take_action_inside_blender(action)
I don't have much experience with the threading or subprocess modules, so I am unsure how to go about building something like this.
Rather than mess around with TensorFlow connections and APIs, I'd suggest you take a look at the OpenAI Universe Starter Agent [1]. The advantage here is that as long as you have a VNC session open, you can connect a TF-based system to do reinforcement learning on your actions.
Once you have a model constructed via this, you can focus on actually building a lower level API system for the two things to talk to each other.
[1] https://github.com/openai/universe-starter-agent
Thanks for your response. Unfortunately, trying to get Universe working with my current setup was a bit of a pain. I'm also on a fairly tight deadline so I just needed something that worked straight away.
I did find a somewhat DIY solution that worked well for my situation using the pickle module. I don't really know how to convert this approach into proper pseudocode, so here is the general outline:
Process 1 - TensorFlow:
load up TF graph
wait until pickled state arrives
while not terminated:
Process 2 - Blender:
run simulation until next action required
pickle state
wait until pickled action arrives
Process 1 - TensorFlow:
unpickle state
feedforward state through neural net, find action
pickle action
wait until next pickled state arrives
Process 2 - Blender:
unpickle action
take action
This approach worked well for me, but I imagine there are more elegant low level solutions. I'd be curious to hear of any other possible approaches that achieve the same goal.
I did this to install tensorflow with the python version that comes bundled with blender.
Blender version: 2.78
Python version: 3.5.2
First of all you need to install pip for Blender's Python. For that I followed the instructions in this link: https://blender.stackexchange.com/questions/56011/how-to-use-pip-with-blenders-bundled-python. (What I did was drag the python3.5m icon into the terminal and then append the command '~/get-pip.py'.)
Once you have pip installed you are all set up to install 3rd party modules and use them with blender.
Navigate to the bin folder inside the '/home/path to blender/2.78/' directory. To install tensorflow, drag the python3.5m icon into the terminal, then drag the pip3 icon into the terminal and add the command install tensorflow.
I got an error mentioning module lib2to3 not found. Even installing the module didn't help as I got the message that no such module exists. Fortunately, I have python 3.5.2 installed on my machine. So navigate to /usr/lib/python3.5 and copy the lib2to3 folder from there to /home/user/blender/2.78/python/lib/python3.5 and then again run the installation command for tensorflow. The installation should complete without any errors. Make sure that you test the installation by importing tensorflow.
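A quick way to confirm you are installing into Blender's bundled Python rather than the system one is to check which interpreter you are actually running; the interpreter that "python -m pip install ..." targets is exactly the one executing the command, which avoids interpreter/pip mixups like the one above:

```python
import sys

# The interpreter running this script is the one that
# "<this interpreter> -m pip install <pkg>" would install into.
print("interpreter:", sys.executable)
print("version:", sys.version_info[:3])
```

Run this once with Blender's python3.5m and once with your system Python; if the two paths differ, packages installed with the system pip will not be visible inside Blender.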
I'm working on an Udoo trying to get the camera to take a picture that I can manipulate inside Python.
So far, the camera works with
gst-launch-1.0 imxv4l2videosrc ! imxipuvideosink
I can also take a single picture with
gst-launch-1.0 imxv4l2videosrc num-buffers=1 ! video/x-raw ! jpegenc ! filesink location=output.jpg
From here it seems like you can read straight from a gstreamer stream in Python with OpenCV.
Here is my python code:
import cv2
cam = cv2.VideoCapture("imxv4l2videosrc ! video/x-raw ! appsink")
ret, image = cam.read()
However, ret is False, and image is nothing.
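One thing worth checking before blaming the OpenCV version: VideoCapture silently fails to open when OpenCV was built without GStreamer support, and appsink often needs a videoconvert element in front of it to receive a format OpenCV can handle. A small sketch (the pipeline string is the one from the question, imxv4l2videosrc is Udoo/i.MX-specific, and the import is guarded so it degrades if OpenCV is absent):

```python
# Sketch: check that the GStreamer pipeline actually opened
# before trying to read frames from it.
try:
    import cv2
    cam = cv2.VideoCapture(
        "imxv4l2videosrc ! video/x-raw ! videoconvert ! appsink")
    opened = cam.isOpened()   # False if OpenCV lacks GStreamer support
    if opened:
        ret, image = cam.read()
        print("frame read:", ret)
    cam.release()
except ImportError:
    opened = False            # OpenCV not installed at all

print("capture opened:", opened)
```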
Some places say this only works with OpenCV 3.0+, and others say 2.4.x, but I can't seem to find an actual answer on which version it works with.
If I need to update to OpenCV 3.0, which part do I update? I installed OpenCV via the apt repositories under the package python-opencv. So do I need to update Python? Can I just build OpenCV from source, and will Python automatically use the newest version? I'm so confused.
The Ubuntu/Debian version is the old 2.4.x; to get the latest one you need to compile it from source.
Here are two tutorials on how to do that:
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_setup/py_setup_in_fedora/py_setup_in_fedora.html#installing-opencv-from-source
http://www.pyimagesearch.com/2015/07/20/install-opencv-3-0-and-python-3-4-on-ubuntu/
The first is for Python 2.7 on Fedora, the second for Python 3.4 on Ubuntu.
I have a python script that's using the gmusicapi and vlc libs to create a media player that takes requests based on text commands entered in a chat/VOIP program. On Windows 10/ in python is there a way to take the audio ONLY coming from the python script and route it to a virtual recording device?
I'm thinking maybe of using the JACK lib, but I'm not familiar with JACK yet.
Essentially what SoundLeech does except instead of writing to a file route the audio to a virtual recording device.
Win 10/ Python 2.7
I personally tried PyAudio, pyAudioAnalyzer and sounddevice.
Since you're running Python 3.6 or lower you can use PyAudio, but I would still recommend sounddevice (I find it more flexible and easy to use, but that is just my opinion).
py -m pip install sounddevice
then in your python code
import sounddevice
I've been using it mainly to list audio devices and change default values and stuff, but you should be able to route one to another.
Maybe with:
sounddevice.Stream
(Note: there is no sounddevice.get_stream; the stream classes are sounddevice.Stream, InputStream and OutputStream.)
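For example, listing the devices and sketching a pass-through could look like this. The callback signature is the one sounddevice's stream classes expect, the import is guarded in case sounddevice or the PortAudio library is missing, and the stream itself is left commented out because it needs real audio devices:

```python
def passthrough(indata, outdata, frames, time, status):
    """Stream callback: copy each input block straight to the output,
    i.e. route one device to another."""
    if status:
        print(status)        # report over/underruns
    outdata[:] = indata

try:
    import sounddevice as sd
    print(sd.query_devices())    # one line per input/output device
    have_sounddevice = True
    # Opening the stream would start the routing:
    # with sd.Stream(channels=2, callback=passthrough):
    #     sd.sleep(5000)
except (ImportError, OSError):   # OSError: PortAudio library missing
    have_sounddevice = False
```

Picking which device feeds which is then a matter of setting sounddevice's default.device (or the device argument of the stream) to the indices shown by query_devices().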