Importing audio track (wav or aiff) in Python

I have an audio track in AIFF format. I would like to open this audio file with Python, import the amplitudes of the sound, and perform some mathematical analysis on them, such as a Fourier transform.
Is this possible in Python?
Are there libraries or modules, which allow me to acquire an audio file?
Throughout my search, I have found scipy.io.wavfile, which works for WAV audio files.
Are there other libraries to import audio files in Python?
Is there something similar for AIFF files?
Obviously, I can convert the AIFF into a WAV file, but I would like to import the AIFF file directly, if possible.
As a side question: are there some more specific (by specific, I mean better than Python) programming languages to perform such kind of analysis and acquisition of audio files?

Python comes with AIFF support as part of the standard library -- see the aifc module.
This module provides support for reading and writing AIFF and AIFF-C
files. AIFF is Audio Interchange File Format, a format for storing
digital audio samples in a file. AIFF-C is a newer version of the
format that includes the ability to compress the audio data.
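For example, here is a minimal sketch (not from the module docs) of reading an AIFF file with aifc and running an FFT with NumPy. The filename is a placeholder and 16-bit PCM samples are assumed; note that aifc was removed from the standard library in Python 3.13, so this applies to older versions.
import aifc
import numpy as np

f = aifc.open("track.aiff", "rb")           # placeholder path
n_channels = f.getnchannels()
sample_rate = f.getframerate()
raw = f.readframes(f.getnframes())
f.close()

# AIFF stores PCM samples big-endian; ">i2" reads 16-bit big-endian integers
samples = np.frombuffer(raw, dtype=">i2")
if n_channels == 2:
    samples = samples.reshape(-1, 2).mean(axis=1)   # mix down to mono

spectrum = np.abs(np.fft.rfft(samples))
freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)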
Depending on what your end goals are, you may be more productive using a tool like Pure Data, which is designed specifically for working with audio and has things like reading audio files and performing FFTs as primitives.

Yes, I also ran into this problem with scipy.io.wavfile. I looked into it and found that scikits.audiolab might be an interesting way around the WAV-only limitation.
https://sites.google.com/site/ldpyproject/scikits-audiolab
As for Pure Data, I use it a lot, but of course it does depend on what you're wishing to do with your sound file.
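For what it's worth, a rough sketch of how audiolab is typically used; the Sndfile name, its attributes, and the read_frames call here are from memory of that library's old API, so treat them as assumptions to check against its documentation.
from scikits.audiolab import Sndfile

f = Sndfile("track.aiff", "r")        # also handles WAV, FLAC, etc. via libsndfile
data = f.read_frames(f.nframes)       # samples come back as a numpy array
sample_rate = f.samplerate
f.close()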

Related

Converting .ul files

I recently copied a bunch of audio files, which are feedback left during a phone call.
The vast majority of them are mp3, but a small percentage are files ending in a .ul extension, which I believe is ULAW.
I have tried to play them in Audacity and VLC, but get garbled sounds. I suspect they are corrupted, but I'd like to confirm that by attempting to convert them to another audio format.
Would anyone be able to recommend a library to do that?
I know Python has the audioop module but I do not know enough to start messing with the audio data.
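For reference, a hedged sketch of what an audioop-based conversion could look like, assuming the .ul files are headerless 8 kHz mono u-law (the usual telephony format); the filenames are placeholders, and audioop itself was removed from the standard library in Python 3.13.
import audioop
import wave

with open("feedback.ul", "rb") as f:        # placeholder filename
    ulaw_data = f.read()

# Convert 8-bit u-law samples to 16-bit linear PCM (output width = 2 bytes)
pcm_data = audioop.ulaw2lin(ulaw_data, 2)

with wave.open("feedback.wav", "wb") as out:
    out.setnchannels(1)       # telephony audio is mono
    out.setsampwidth(2)       # 16-bit samples
    out.setframerate(8000)    # standard telephony sample rate
    out.writeframes(pcm_data)
If the result still sounds garbled after a conversion like this, the files may indeed be corrupted (or use a different sample rate or encoding than assumed here).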

Exporting Audio for Google Speech using pydub

I'm trying to export audio files to LINEAR16 for Google Speech, and I notice that they specify little-endian byte ordering. I'm using pydub to export to 'raw' format, but I can't tell from the documentation (or the source) whether the exported files are little- or big-endian.
I'm using the following command for exporting:
audio = pydub.AudioSegment.from_file(self.mFilePathName, "mp4")
fullFileNameRaw = "audio.raw"
audio.export(fullFileNameRaw, format='raw')
Thank you.
-K
According to this answer, standard (RIFF) WAVE files are little-endian. Pydub uses the stdlib wave module to write wave files, so I'm guessing it is little-endian (if you write the file with the wave headers, it does in fact start with RIFF).
Looking into it a little further, though, it seems like it may depend on the hardware platform's endianness. x86 and AMD64 are both little-endian, though, so that covers basically all the places people would run pydub (I think?).
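If the goal is specifically Google Speech's LINEAR16 (16-bit signed little-endian PCM), one hedged option is to force the segment's sample width, channel count, and rate before exporting raw PCM; the input path and the 16 kHz rate below are assumptions, not part of the original question.
from pydub import AudioSegment

audio = AudioSegment.from_file("input.mp4", format="mp4")   # placeholder path
audio = audio.set_sample_width(2)       # 16-bit samples
audio = audio.set_channels(1)           # mono
audio = audio.set_frame_rate(16000)     # use whatever rate you tell the API about
audio.export("audio.raw", format="raw")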

Pure python library for MIDI to Score (Notes) and/or Audio Translation

I want something that abstracts away MIDI events, to extract/synthesize pitch/duration/dynamic/onset (e.g. loud D# quarter note on the 4th beat).
fluidsynth and timidity work, but I'd prefer a pure python library. I can't find anything but bindings here.
midiutil makes MIDIs and pygame plays them, but I want something that can both synthesize raw audio data and quantize the notes (i.e. as they would be represented in sheet music, not as midi events / pulses / "pitch" / etc).
EDIT: these don't quite do it (either not in python, or too low-level, or "do it yourself"):
Get note data from MIDI file
Python: midi to audio stream
What you probably want is a process called "quantization" which matches the midi events to the closest note length.
I wrote such an app in C in 1999:
http://www.findthatzipfile.com/search-3558240-hZIP/winrar-winzip-download-midi2tone.zip.htm
(I don't have source any more, sorry)
The process itself is not very complex: I just brute-forced different note lengths to find the closest match. MIDI event pitches map directly to notes, so no conversion is needed there.
The MIDI format itself is not very complex, so I suggest you find a pure Python MIDI reading library and then apply the algorithm on top of that.
https://github.com/vishnubob/python-midi
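As a toy illustration of that brute-force matching (not taken from the program above), snapping a measured duration in beats to the nearest standard note length might look like this:
# Standard note lengths in beats: whole, half, quarter, eighth, sixteenth
STANDARD_LENGTHS = [4.0, 2.0, 1.0, 0.5, 0.25]

def quantize_duration(duration_beats):
    """Return the standard note length closest to the measured duration."""
    return min(STANDARD_LENGTHS, key=lambda length: abs(length - duration_beats))

print(quantize_duration(0.9))    # 1.0  -> quarter note
print(quantize_duration(0.3))    # 0.25 -> sixteenth note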
Have you tried Mingus? It works with pyFluidSynth: http://code.google.com/p/mingus/wiki/tutorialFluidsynth

Python Creating raw audio

I use Windows 7. All I want to do is create raw audio and stream it to a speaker. After that, I want to create classes that can generate sine progressions (basically, a tone that slowly gets more and more shrill). After that, I want to put my raw audio into audio codecs and containers like .WAV and .MP3 without going insane. How might I be able to achieve this in Python without using dependencies that don't come with a standard install?
I have looked through a lot of files, descriptions, and related questions here and all over the internet. I have read about PCM and ADPCM, as well as A/D converters. Where I get lost is somewhere between the ratio of byte input to Kbps output, and all that.
Really, all I want is for somebody to point me in the right direction for learning the audio formats precisely, and how to use them in Python (but first I want to start with raw audio).
This question really has two parts:
1. How do I generate audio signals?
2. How do I play audio signals through the speakers?
I wrote a simple wrapper around the python std lib's wave module, called pydub, which you can look at (on github) as a point of reference for how to manipulate raw audio data.
I generally just export the audio data to a file and then play it using VLC player. IMHO there's no reason to write a bunch of code to playback audio unless you're making a synthesizer or a game or some other realtime app.
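As a minimal sketch of that approach using nothing but the standard library (the 440 Hz tone and the output filename are just placeholders): generate samples, pack them as 16-bit little-endian PCM, and write a WAV file you can then open in VLC.
import math
import struct
import wave

SAMPLE_RATE = 44100
DURATION_S = 2.0
FREQ_HZ = 440.0

frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION_S)):
    sample = math.sin(2 * math.pi * FREQ_HZ * n / SAMPLE_RATE)
    frames += struct.pack("<h", int(sample * 32767))   # 16-bit signed little-endian

with wave.open("tone.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)
    out.setframerate(SAMPLE_RATE)
    out.writeframes(bytes(frames))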
Anyway, I hope that helps you get started :)

How do I mix audio files using python?

I would like to do basic audio mixing in python.
To give an example: I would like to take two mp3 files and add them together and return one mp3 file. Another example: I would like to take the first ten seconds of one mp3 file and add it to the beginning of another mp3 file.
What is the best way to accomplish these tasks? I would like to use built-in Python modules like audioop, but I cannot find any good tutorials or sample code for using them.
I am going through the docs, but I am pretty confused and cannot figure out how to do things like this. I am not even sure whether the built-in Python libraries can handle MP3s. Most of the material I have looked at seems to refer to WAV files. So, if that is the case, a follow-up question would be: is there an easy way to convert an MP3 to a WAV for manipulation and back again?
You can do this pretty easily using pydub:
from pydub import AudioSegment
sound1 = AudioSegment.from_mp3("/path/to/file1.mp3")
sound2 = AudioSegment.from_mp3("/path/to/file2.mp3")
# mix sound2 with sound1, starting at 5000ms into sound1
output = sound1.overlay(sound2, position=5000)
# save the result
output.export("mixed_sounds.mp3", format="mp3")
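The question's second example works much the same way (a small addition to the snippet above, not part of the original answer): pydub slices in milliseconds and + concatenates segments.
intro = sound1[:10 * 1000]                # first ten seconds of sound1
combined = intro + sound2                 # intro followed by all of sound2
combined.export("combined.mp3", format="mp3")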
You could check out some of the code in the python audiotools project. It is a collection of command-line utilities which make use of a common python package. There is a utility included with audiotools (trackcat) which can concatenate two or more audio tracks; another (tracksplit) can split an audio track (using a .cue file). These, as well as the numerous other included utilities, can work with audio files of various encodings, including mp3.
The way I've done this in the past is to just use subprocess and call SoX.
E.g. subprocess.call(["sox", "in.1.mp3", "in.2.mp3", "out.mp3"])
