I have an IR sensor with a TRS connector, and I can record my remote's signals as audio.
Now I want to control my computer with a TV remote, but I have no idea how to compare the audio input with pre-recorded audio. Then I realized that these audio waves only carry some kind of data (binary), so if I can turn them into binary or hex they will be much easier to compare.
Waves look just like this:
And this:
These are recordings of the "OK" button. Sometimes there are impulses on the right channel too, and I don't know why; maybe the connections in the sensor are damaged.
OK, that doesn't matter; anyway,
I need help with a Python program that reads these impulses and turns them into binary, in real time from the audio input (mic).
I know it sounds like "do it for me while I enjoy my life", but I have no experience with transforming/reading sound... I've been looking for Python examples for recording and reading audio, but without success.
This is quite easy if you can forgo the real-time requirement: just save the data as a .wav file, and then read it in using Python's wave module.
Here's an example of how to read a WAV file in Python:
import wave
w = wave.open("myfile.wav", "rb")
binary_data = w.readframes(w.getnframes())
w.close()
It's possible to do this in real time, but it's harder, though still not super difficult. For real time, I use PyAudio, and a good start would be to follow the examples in the demos. In these you basically open a stream and read small chunks at a time, and if you want any interactivity, you need to do this in a thread.
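For reference, here's a rough sketch of that chunked-reading approach with PyAudio (the sample rate, chunk size, and the pulse-detection step are assumptions you'd adapt to your setup):
import pyaudio

CHUNK = 1024                     # frames per read (arbitrary choice)
RATE = 44100                     # sample rate; adjust to your sound card

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                input=True, frames_per_buffer=CHUNK)
try:
    while True:
        data = stream.read(CHUNK)      # raw 16-bit samples from the mic
        # ... detect the IR pulses in `data` here ...
finally:
    stream.stop_stream()
    stream.close()
    p.terminate()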
(Also, note that the sound card will filter your audio input, so what you see isn't going to be the true input signal. In particular, I think remote controls often have a carrier frequency around 40 kHz, which is higher than human hearing, so I doubt that sound cards work well in this range, though they may be sufficient depending on what you want to do.)
Related
Using Python, I want to be able to sample, in 2-second chunks, the audio that is being played through the computer's output audio device.
My goal is to be able to detect a specific noise that comes through the speakers of a PC using Python, and this is the first step in doing so.
I have taken a look at the sounddevice docs but can't seem to determine the correct way to achieve this behaviour; the documentation doesn't appear to cover it.
Can somebody please help out?
It appears I am looking for some form of audio loopback recording in Python.
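Not a full answer, but a minimal sketch of recording in 2-second chunks with sounddevice; capturing what the speakers play (rather than a microphone) additionally requires a loopback-capable device (e.g. "Stereo Mix" on Windows or a virtual audio cable), which you would select via the device argument:
import sounddevice as sd

samplerate = 44100                       # assumed sample rate
chunk_seconds = 2

while True:
    # records chunk_seconds of audio into a NumPy array (float32 by default)
    chunk = sd.rec(int(chunk_seconds * samplerate),
                   samplerate=samplerate, channels=2)
    sd.wait()                            # block until the chunk is recorded
    # ... analyse `chunk` for the noise you want to detect ...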
I recently copied a bunch of audio files, which are feedback left during a phone call.
The vast majority of them are mp3, but a small percentage are files ending in a .ul extension, which I believe is ULAW.
I have tried to play them in Audacity and VLC, but get garbled sounds. I suspect they are corrupted, but I'd like to confirm that by attempting to convert them to another audio format.
Would anyone be able to recommend a library to do that?
I know Python has the audioop module but I do not know enough to start messing with the audio data.
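In case it helps, here is a small sketch using the standard-library audioop and wave modules; it assumes the .ul files are headerless 8 kHz mono u-law (the usual telephony format), which is a guess on my part, and the file names are hypothetical:
import audioop
import wave

with open('feedback.ul', 'rb') as f:        # hypothetical file name
    ulaw_data = f.read()

pcm_data = audioop.ulaw2lin(ulaw_data, 2)   # decode u-law to 16-bit linear PCM

with wave.open('feedback.wav', 'wb') as w:
    w.setnchannels(1)                       # mono
    w.setsampwidth(2)                       # 16-bit samples
    w.setframerate(8000)                    # assumed 8 kHz telephony rate
    w.writeframes(pcm_data)
If the result is still garbled, the files may indeed be corrupted, or they may use a different sample rate or A-law instead of u-law.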
For my final-year project in college I am working with WAV files in Python and messing around with them. I would love to be able to play the sound samples from memory rather than write them out to a WAV file before I can hear them.
I have been looking online for weeks and have found PyMedia, PySound, PyGame, etc., and none of them seem to work for me. Every single package gives me errors.
Are there other libraries I am missing that would help me do this? Or am I just being stupid and unable to get the other packages to work?
Exactly what I want to do is along the lines of this:
import wave

# open file and get parameters
wfile = wave.open("file.wav", "r")
params = wfile.getparams()
nframes = params[3]

# get sound samples in a list
samples = []
for i in range(nframes):
    samples.append(wfile.readframes(1))

playsound(samples)                      # play straight from memory (pseudocode)
changedSamples = makeChangeTo(samples)  # edit the samples (pseudocode)
playsound(changedSamples)
And I would like to have this in a loop so I can edit the samples and hear the edits while the program is still running, without having to write them to a WAV file first, as that takes too long.
Any suggestions? Cheers!
You should clearly separate those two concerns:
Reading/writing WAV files (or other audio files)
Playing/recording sounds
There are several questions and answers to both topics here on SO.
This is my personal (and of course biased) recommendation:
You should use NumPy to manipulate sounds; it's much easier than handling plain Python buffers.
If for some reason you cannot use NumPy, you can still do all this, but it will be a bit more work.
For reading/writing sound files, I recommend the soundfile module (full disclosure: I'm a co-author).
For playing/recording sounds, I recommend the sounddevice module (full disclosure: I'm its main author).
When using those modules, your example would probably become something like this:
import soundfile as sf
import sounddevice as sd
samples, samplerate = sf.read('file.wav')
sd.play(samples, samplerate)
sd.wait()
changed_samples = make_change_to(samples)
sd.play(changed_samples, samplerate)
sd.wait()
If you work in an interactive Python prompt, you probably don't need the sd.wait() calls; you can just wait until the playback has finished. Or, if you get bored of listening to it, you can use:
sd.stop()
If you know that you will use the same sampling rate for some time, you can set it as default:
sd.default.samplerate = 48000
After that, you can drop the samplerate argument when using play():
sd.play(samples)
If you want to store the changed sound to a file, you can use something like this:
sf.write('changed_file.wav', changed_samples, samplerate)
Further reading:
different options for reading/writing audio files
different options for playback/recording
a very basic tutorial about handling audio signals
I want something that abstracts away MIDI events, to extract/synthesize pitch/duration/dynamics/onset (e.g. a loud D# quarter note on the 4th beat).
fluidsynth and timidity work, but I'd prefer a pure-Python library. I can't find anything but bindings here.
midiutil makes MIDIs and pygame plays them, but I want something that can both synthesize raw audio data and quantize the notes (i.e. as they would be represented in sheet music, not as MIDI events / pulses / "pitch" / etc.).
EDIT: these don't quite do it (either not in python, or too low-level, or "do it yourself"):
Get note data from MIDI file
Python: midi to audio stream
What you probably want is a process called "quantization", which matches the MIDI events to the closest note length.
I wrote such an app in C in 1999:
http://www.findthatzipfile.com/search-3558240-hZIP/winrar-winzip-download-midi2tone.zip.htm
(I don't have source any more, sorry)
The process itself is not very complex. I just brute-forced different note lengths to find the closest match. MIDI event pitches themselves map directly to notes, so no conversion is needed there.
The MIDI format itself is not very complex, so I suggest you find a pure-Python MIDI reading library and then apply the algorithm on top of that.
https://github.com/vishnubob/python-midi
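As an illustration of the brute-force matching described above (the note-length table and beat-based durations are my own assumptions, not taken from midi2tone):
NOTE_LENGTHS = {
    'whole': 4.0, 'half': 2.0, 'quarter': 1.0,
    'eighth': 0.5, 'sixteenth': 0.25,
}

def quantize(duration_in_beats):
    # pick the standard note length closest to the measured duration
    return min(NOTE_LENGTHS, key=lambda name: abs(NOTE_LENGTHS[name] - duration_in_beats))

print(quantize(0.9))   # a note lasting 0.9 beats is treated as a 'quarter'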
Have you tried mingus? It works with pyFluidSynth: http://code.google.com/p/mingus/wiki/tutorialFluidsynth
I use Windows 7. All I want to do is create raw audio and stream it to a speaker. After that, I want to create classes that can generate sine progressions (basically, a tone that slowly gets more and more shrill). After that, I want to put my raw audio into audio codecs and containers like .WAV and .MP3 without going insane. How might I be able to achieve this in Python without using dependencies that don't come with a standard install?
I looked up a great deal of files, descriptions, and related questions from here and all over the internet. I read about PCM and ADPCM, as well as A/D Converters. Where I get lost is somewhere between the ratio of byte input --> Kbps output, and all that stuff.
Really, all I want is for somebody to point me in the right direction so I can learn about the audio formats precisely and how to use them in Python (but I want to start with raw audio first).
This question really has two parts:
How do I generate audio signals?
How do I play audio signals through the speakers?
I wrote a simple wrapper around the Python standard library's wave module, called pydub, which you can look at (on GitHub) as a point of reference for how to manipulate raw audio data.
I generally just export the audio data to a file and then play it using VLC. IMHO there's no reason to write a bunch of code to play back audio unless you're making a synthesizer, a game, or some other real-time app.
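If you do want to stay within the standard library, here is a minimal sketch that generates one second of a 440 Hz sine tone as 16-bit PCM and writes it to a WAV file (the file name, frequency, and amplitude are arbitrary choices):
import math
import struct
import wave

RATE = 44100
FREQ = 440.0
AMPLITUDE = 32000                      # just below the 16-bit maximum

frames = b''.join(
    struct.pack('<h', int(AMPLITUDE * math.sin(2 * math.pi * FREQ * i / RATE)))
    for i in range(RATE)               # one second of samples
)

with wave.open('tone.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(frames)
On Windows you can then play the file with the standard-library winsound module, e.g. winsound.PlaySound('tone.wav', winsound.SND_FILENAME).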
Anyway, I hope that helps you get started :)