I have a Python script that uses the gmusicapi and vlc libs to create a media player that takes requests based on text commands entered in a chat/VOIP program. On Windows 10, is there a way in Python to take ONLY the audio coming from the Python script and route it to a virtual recording device?
I'm thinking maybe the JACK lib, but I'm not familiar with JACK yet.
Essentially what SoundLeech does, except instead of writing to a file, route the audio to a virtual recording device.
Win 10/ Python 2.7
I personally tried pyAudio, pyAudioAnalyzer, and sounddevice.
Since you're running Python 3.6 or lower you can use pyAudio, but I would still recommend sounddevice (I find it more flexible and easier to use, but that is just my opinion):
py -m pip install sounddevice
then in your Python code:
import sounddevice
I've been using it mainly to list audio devices and change default values, but you should be able to route one device to another.
Maybe with:
sounddevice.get_stream
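For reference, the listing/defaults usage looks something like this (the index 0 below is a placeholder; pick one from your own device list):

import sounddevice as sd

# Print every device PortAudio can see, with its index.
print(sd.query_devices())

# Point the defaults at a chosen device and sample rate.
sd.default.device = 0
sd.default.samplerate = 44100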
I am trying to use porcupine on my Jetson Nano as a wake word.
In order to do this I need to record audio in PCM format (which I believe is raw format) using Python.
I also need the sample rate to be 16,000 Hz, 16-bit linearly encoded, on a single channel. My input device index is 11.
So, how would I be able to record audio in this format using Python?
It looks like there is a demo already setup for this from porcupine's side.
Check out their demo here; it's a lot of code, so I won't paste it all.
Essentially it requires installing the pvporcupinedemo package:
$ sudo pip3 install pvporcupinedemo
And then running the demo script (located in the Python demo) to start running the processing:
$ porcupine_demo_mic --access_key ${ACCESS_KEY} --keywords picovoice
There are various arguments to this script, which can be found documented in the repo itself.
The demo explicitly states that this should work for the Jetson Nano:
Runs on Linux (x86_64), Mac (x86_64 and arm64), Windows (x86_64),
Raspberry Pi (all variants), NVIDIA Jetson (Nano), and BeagleBone.
To make sure the demo detects your microphone, you can run the demo with the flag that lists audio devices:
$ porcupine_demo_mic --show_audio_devices
And you should see something like:
index: 0, device name: USB Audio Device
index: 1, device name: MacBook Air Microphone
Then you can determine which mic is correct, and use the index as an argument to the demo, e.g. for the "USB Audio Device":
$ porcupine_demo_mic --access_key ${ACCESS_KEY} --keywords picovoice --audio_device_index 0
I would then go ahead and start picking apart the code in their demo to modify it as required.
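Independent of the demo, here is a minimal sketch of recording in the required format with the sounddevice library (the device index 11 is taken from your question; you would need to install sounddevice first if you go this route):

import sounddevice as sd

SAMPLE_RATE = 16000   # 16 kHz, as required
DEVICE_INDEX = 11     # input device index from the question

# Record one second of 16-bit linearly encoded PCM on a single channel.
frames = sd.rec(SAMPLE_RATE, samplerate=SAMPLE_RATE, channels=1,
                dtype='int16', device=DEVICE_INDEX)
sd.wait()  # block until the recording finishes

# Write raw, headerless PCM to disk (this is what "raw format" usually means).
frames.tofile('recording.pcm')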
I have a simulation inside Blender that I would like to control externally using the TensorFlow library. The complete process would go something like this:
while True:
    state = observation_from_blender()
    action = find_action_using_tensorflow_neural_net(state)
    take_action_inside_blender(action)
I don't have much experience with the threading or subprocess modules and so am unsure as to how I should go about actually building something like this.
Rather than mess around with TensorFlow connections and APIs, I'd suggest you take a look at the OpenAI Universe Starter Agent[1]. The advantage here is that, as long as you have a VNC session open, you can connect a TF-based system to do reinforcement learning on your actions.
Once you have a model constructed via this, you can focus on actually building a lower level API system for the two things to talk to each other.
[1] https://github.com/openai/universe-starter-agent
Thanks for your response. Unfortunately, trying to get Universe working with my current setup was a bit of a pain. I'm also on a fairly tight deadline, so I just needed something that worked straight away.
I did find a somewhat DIY solution that worked well for my situation using the pickle module. I don't really know how to convert this approach into proper pseudocode, so here is the general outline:
Process 1 - TensorFlow:
    load up TF graph
    wait until pickled state arrives

while not terminated:

    Process 2 - Blender:
        run simulation until next action required
        pickle state
        wait until pickled action arrives

    Process 1 - TensorFlow:
        unpickle state
        feedforward state through neural net, find action
        pickle action
        wait until next pickled state arrives

    Process 2 - Blender:
        unpickle action
        take action
This approach worked well for me, but I imagine there are more elegant low level solutions. I'd be curious to hear of any other possible approaches that achieve the same goal.
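For what it's worth, here is a minimal sketch of the file-based handshake described in the outline; the file names and helper functions are my own invention, not part of any library:

import os
import pickle
import time

STATE_FILE = "state.pkl"    # written by the Blender process, read by TensorFlow
ACTION_FILE = "action.pkl"  # written by the TensorFlow process, read by Blender

def send(obj, path):
    # Pickle to a temp file first, then rename, so the reader never sees a partial file.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(obj, f)
    os.rename(tmp, path)

def receive(path, poll_interval=0.01):
    # Block until the file appears, unpickle it, and remove it for the next round.
    while not os.path.exists(path):
        time.sleep(poll_interval)
    with open(path, "rb") as f:
        obj = pickle.load(f)
    os.remove(path)
    return obj

# TensorFlow side:                    # Blender side:
#   while not terminated:             #   while not terminated:
#       state = receive(STATE_FILE)   #       state = run_simulation()
#       action = net(state)           #       send(state, STATE_FILE)
#       send(action, ACTION_FILE)     #       action = receive(ACTION_FILE)
#                                     #       take_action(action)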
Here is what I did to install TensorFlow with the Python version that comes bundled with Blender.
Blender version: 2.78
Python version: 3.5.2
First of all you need to install pip for Blender's Python. For that I followed the instructions in this link: https://blender.stackexchange.com/questions/56011/how-to-use-pip-with-blenders-bundled-python. (What I did was drag the python3.5m icon into the terminal and then type '~/get-pip.py' after it.)
Once you have pip installed, you are all set to install third-party modules and use them with Blender.
Navigate to the bin folder inside the '/home/path to blender/2.78/' directory. To install TensorFlow, drag the python3.5m icon into the terminal, then drag the pip3 icon into the terminal, and finish the command with 'install tensorflow'.
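Typed out, the resulting command looks something like this (the exact paths depend on your install):

$ /home/user/blender/2.78/python/bin/python3.5m /home/user/blender/2.78/python/bin/pip3 install tensorflow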
I got an error saying the module lib2to3 was not found. Even installing the module didn't help, as I got the message that no such module exists. Fortunately, I have Python 3.5.2 installed on my machine, so I navigated to /usr/lib/python3.5, copied the lib2to3 folder from there to /home/user/blender/2.78/python/lib/python3.5, and then ran the installation command for tensorflow again. The installation should complete without any errors. Make sure that you test the installation by importing tensorflow.
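For reference, the copy step is something like:

$ cp -r /usr/lib/python3.5/lib2to3 /home/user/blender/2.78/python/lib/python3.5/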
I have a weather station and I need to write a program which can take the instrument readings and save them in a text file. I decided to try pywws to retrieve the data, but I cannot import the module into Python to use its commands. I am doing this on a Raspberry Pi Model B with Python 3 and the latest version of pywws. When I try import pywws, it says that the module does not exist. I am using a USB wireless weather forecaster from Maplin, and so far I have been following this tutorial to set it up: http://www.weather.dragontail.co.uk/index.php?page=pywws_ini
I fixed this problem; it turns out that pywws does not work with Python 3, only with Python 2. I booted up Python 2, typed "import pywws", and it worked fine.
pywws definitely does work with Python 3, as long as you install it correctly. Python 2 to Python 3 translation is done by the setup.py script during installation.
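If you want to check which interpreter actually sees the package, a quick sanity check (run it under both python2 and python3) is:

import pywws
print(pywws.__file__)  # shows where the package was found, if the import succeeds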
I tried
import pyaudio

p = pyaudio.PyAudio()
for i in range(p.get_device_count()):
    print(p.get_device_info_by_index(i))
but I don't get the full list of all devices: for example, I don't get ASIO devices in this list. This is strange, because PortAudio should give ASIO devices as well, right?
How can I list all audio devices with pyaudio ?
I've created (a while after this question was posted) the sounddevice module for Python, which includes its own DLLs with ASIO support (and all other host APIs, too).
It can be installed with:
pip install sounddevice --user
After that, you can list all your devices with:
python -m sounddevice
Of course you can also do this within Python:
import sounddevice as sd
sd.query_devices()
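If you want to confirm that ASIO support actually made it into your build, you can also list the host APIs, e.g.:

import sounddevice as sd

# Each entry is one host API (MME, WASAPI, ASIO, ...) with the devices it exposes.
for api in sd.query_hostapis():
    print(api['name'], '->', len(api['devices']), 'device(s)')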
I think your expectations are reasonable. The equivalent C code to enumerate PortAudio devices would give you all available devices. There are a couple of things that could be wrong:
Your build of PyAudio has not been compiled with ASIO support. PortAudio will only enumerate devices for the native host APIs that have been configured/compiled in at compile time.
You have a 64-bit build of Python/PyAudio and your ASIO device drivers are 32-bit, or vice versa (64-bit ASIO drivers and 32-bit Python).
As Multimedia Mike suggests, you can eliminate PyAudio from the equation by enumerating PA devices from C. The pa_devs.c program in the PortAudio distribution does this.
I'm thinking that the problem could be in the underlying PortAudio library. Do you have (or can you write, in C) a simple utility that accesses the PortAudio library and tries to perform this same listing?
Also, googling for 'portaudio asio' reveals this tidbit from the official PortAudio docs:
There are cases where PortAudio is limited by the capabilities of the
underlying native audio API... the ASIO SDK only allows one device to
be open at a time, so PortAudio/ASIO doesn't currently support opening
multiple ASIO devices simultaneously.
I am very new to the Raspberry Pi and Python.
I am trying to write a program in Python on the Raspberry Pi to use the Kinect. I aim to install OpenKinect on the Raspberry Pi.
So far I have done:
apt-cache search OpenKinect
sudo apt-get install python-freenect
sudo apt-get update
Next I tried writing code in Python from this link: https://github.com/OpenKinect/libfreenect/blob/master/wrappers/python/demo_cv_async.py
When I try to run the program, it reports an error on line 5, import cv:
ImportError: No module named cv
I am not sure if I have installed all the necessary files, and I am not sure what I have done wrong.
I have also been looking for tutorials on installing and using OpenKinect.
Congratulations on starting Python! That sounds like a complicated project to start on. You should probably try doing the tutorial first at python.org. I particularly like the Google video tutorials (if you are a classroom kind of person): http://www.youtube.com/watch?v=tKTZoB2Vjuk
After that you can dig into more detailed stuff :)
It looks like you still don't have the OpenCV package for Python. Try installing it:
sudo apt-get install python-opencv
The 'OpenGL or GTK-Warning: Cannot open display' error, or the other one you stated:
Number of devices found: 1 GL thread write reg 0x0105 <= 0x00 freeglut(freenect-glview): OpenGL GLX extension not supported by display ':0.0'
appears because the glview demo is written against OpenGL, which the Raspberry Pi does not support; the Pi only provides GLES through EGL.
bmwesting (Brandt) wrote:
"The freenect library provides a demo for the Kinect called glview. The glview program will not work with the Pi because the program is written using OpenGL. The Raspberry Pi only supports GLES through EGL.
It seems like you will be able to use libfreenect to grab the depth stream and rgb stream, but will be unable to run the demo program because it uses the incorrect graphics API."
If you read through that thread, it should show the alternatives (i.e. the ASUS Xtion instead of the Kinect). They reach 30 fps at high (~1024x800) resolution for depth data when using console output mode. I plan to go for the Xtion now too, and I hope to deactivate as much as possible on the USB bus (as this seems to be the bottleneck, for the Kinect too I think).
When you install OpenCV using apt-get install python-opencv, you are installing version 2. However, you can still use the methods from version 1 by doing so:
import cv2.cv as cv
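As a quick check that both APIs are then available (CreateImage is one of the legacy calls the old demos use):

import cv2           # the version-2 API
import cv2.cv as cv  # the legacy version-1 API that the freenect demo imports

print(cv2.__version__)                                   # confirm which OpenCV is installed
image = cv.CreateImage((640, 480), cv.IPL_DEPTH_8U, 3)   # a legacy-API call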