High level audio crossfading library for Python

I am looking for a high-level audio library for Python that supports crossfading (and works on Linux). In fact, crossfading songs and saving the result is about the only thing I need.
I tried pyechonest, but I find it really slow, and working with multiple songs at once is hard on memory: when I tried to crossfade about 10 songs into one, I got out-of-memory errors and my script was using 1.4 GB of memory. So now I'm looking for something else that works with Python.
I have no idea whether anything like that exists; if not, are there good command-line tools for this? I could write a wrapper around such a tool.

A list of Python sound libraries.
Play a Sound with Python
PyGame or Snack would work, but for this, I'd use something like audioop.
Basic first steps here: merge background audio file
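If the songs are plain WAV files with matching sample rate, sample width, and channel count, a rough sketch of a crossfade using only the standard library's wave and audioop modules could look like this (the file names, crossfade length, and stepped gain curve are all assumptions, not a reference implementation):
import wave
import audioop

CROSSFADE_MS = 5000  # length of the overlap between the two songs

def read_wav(path):
    with wave.open(path, "rb") as w:
        return w.getparams(), w.readframes(w.getnframes())

params, a = read_wav("song1.wav")
_, b = read_wav("song2.wav")

width = params.sampwidth
bytes_per_frame = width * params.nchannels
overlap_frames = int(params.framerate * CROSSFADE_MS / 1000)
overlap_bytes = overlap_frames * bytes_per_frame

head = a[:-overlap_bytes]           # song1 before the overlap
fade_out_part = a[-overlap_bytes:]  # end of song1, to be faded out
fade_in_part = b[:overlap_bytes]    # start of song2, to be faded in
tail = b[overlap_bytes:]            # song2 after the overlap

# audioop.mul applies one constant gain, so approximate the fade with a
# stepped gain curve over small, frame-aligned chunks of the overlap.
STEPS = 100
chunk = (overlap_frames // STEPS) * bytes_per_frame
mixed = bytearray()
for i in range(STEPS):
    s1 = audioop.mul(fade_out_part[i * chunk:(i + 1) * chunk], width, 1.0 - i / STEPS)
    s2 = audioop.mul(fade_in_part[i * chunk:(i + 1) * chunk], width, i / STEPS)
    mixed += audioop.add(s1, s2, width)
# whatever is left over after STEPS whole chunks
mixed += audioop.add(fade_out_part[STEPS * chunk:], fade_in_part[STEPS * chunk:], width)

with wave.open("crossfaded.wav", "wb") as out:
    out.setparams(params)
    out.writeframes(head + bytes(mixed) + tail)
For a chain of many songs, crossfading them pairwise and writing each intermediate result to disk keeps memory usage bounded instead of decoding everything at once.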

A scriptable solution using external tools AviSynth and avs2wav or WAVI:
Create an AviSynth script file:
test.avs
v=ColorBars()
a1=WAVSource("audio1.wav").FadeOut(50)
a2=WAVSource("audio2.wav").Reverse().FadeOut(50).Reverse()
AudioDub(v,a1+a2)
The script fades out audio1 and stores that in a1, then fades in audio2 (by reversing it, fading out, and reversing back) and stores that in a2.
a1 and a2 are concatenated and then dubbed over a ColorBars test pattern to make a video.
You can't just work with audio alone - a valid video must be generated.
I kept the script as simple as possible for demonstration purposes. Google for more details on audio processing via AviSynth.
Now using avs2wav (or WAVI) you can render the audio:
avs2wav.exe test.avs combined.wav
or
wavi.exe test.avs combined.wav
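Since the question mentions wrapping a command-line tool, the render step can be driven from Python with subprocess. A minimal sketch, where the executable name and file paths are assumptions (adjust them to wherever avs2wav or WAVI lives on your system):
import subprocess

# run the external renderer on the AviSynth script and write the mixed audio
subprocess.check_call(["avs2wav.exe", "test.avs", "combined.wav"])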
Good luck!
Some references:
How to edit with Avisynth
AviSynth filters reference

Related

automatically generate a play using python

To save a lot of work, I'm trying to generate a sound file of a script for a play that includes several different voices. These voices should be computer generated. I have software (NaturalReader13) for generating these voices. Since I don't want the entire play read in one voice, I can't upload the whole text into NaturalReader and export it.
I could export several voice files and then mix them into a coherent whole, but this takes a long time and patience. I tried this already using Audacity to mix. Instead, I want to automate this by interfacing with the program using Python. I have no idea how to do this.
There are free voice generators online that would work for this task, but they are much lower quality. If interfacing with NaturalReader is too complex, getting the data from the web might be easier.
So basically, the script I have is of the form:
Character: These are the lines that need to be read...
Any ideas on how to approach this?
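A possible first step, before worrying about driving the voice software, is splitting the script into (character, line) pairs. A minimal sketch, assuming every spoken line follows the "Character: text" format shown above and that the file name is a placeholder:
# parse a play script where each line looks like "Character: text"
def parse_script(path):
    parts = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if ":" not in line:
                continue  # skip stage directions, blank lines, etc.
            character, text = line.split(":", 1)
            parts.append((character.strip(), text.strip()))
    return parts

for character, text in parse_script("play.txt"):
    print(character, "->", text)
From there, each (character, text) pair can be handed to whatever voice back end you settle on, one clip per line, and the clips concatenated at the end.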

Python WAV audio play *Without External Libraries*

I have been having some issues for the past few days getting this to work. All I need is to learn it once so I can use it later. What I need is a good example of working source code that plays a simple WAV file. I do not want to use an external library just to get this working; I honestly don't see the point in pulling in a huge library for one problem. So if I can get a (once again, non-external) example, that would be great. (I'm using Windows, so winsound should work, but I can't get the winsound.PlaySound('Example.wav', SND_FILENAME) call to work.) Thanks!
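For reference, the basic winsound call looks roughly like this; note that the whole file name belongs inside the quotes, and "example.wav" is just a placeholder path:
import winsound

# play a WAV file synchronously using only the standard library (Windows only)
winsound.PlaySound("example.wav", winsound.SND_FILENAME)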

Pure python library for MIDI to Score (Notes) and/or Audio Translation

I want something that abstracts away MIDI events, to extract/synthesize pitch/duration/dynamic/onset (e.g. loud D# quarter note on the 4th beat).
fluidsynth and timidity work, but I'd prefer a pure python library. I can't find anything but bindings here.
midiutil makes MIDIs and pygame plays them, but I want something that can both synthesize raw audio data and quantize the notes (i.e. as they would be represented in sheet music, not as midi events / pulses / "pitch" / etc).
EDIT: these don't quite do it (either not in python, or too low-level, or "do it yourself"):
Get note data from MIDI file
Python: midi to audio stream
What you probably want is a process called "quantization" which matches the midi events to the closest note length.
I wrote such an app in C in 1999:
http://www.findthatzipfile.com/search-3558240-hZIP/winrar-winzip-download-midi2tone.zip.htm
(I don't have source any more, sorry)
The process itself is not very complex. I just brute-forced different note lengths to find the closest match. MIDI event pitches themselves map directly to notes, so no conversion is needed there.
MIDI format itself is not very complex, so I suggest you find a pure Python MIDI reading library and then apply the algorithm on the top of that.
https://github.com/vishnubob/python-midi
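As a rough illustration of the "closest note length" idea described above (the tick resolution and the note-length table are assumptions, not taken from any particular library):
TICKS_PER_BEAT = 480  # a common MIDI resolution

NOTE_LENGTHS = {
    "whole": 4 * TICKS_PER_BEAT,
    "half": 2 * TICKS_PER_BEAT,
    "quarter": TICKS_PER_BEAT,
    "eighth": TICKS_PER_BEAT // 2,
    "sixteenth": TICKS_PER_BEAT // 4,
}

def quantize(duration_ticks):
    # return the note name whose length is closest to the measured duration
    return min(NOTE_LENGTHS, key=lambda name: abs(NOTE_LENGTHS[name] - duration_ticks))
Reading the events themselves (note-on/note-off times and pitches) can then be left to a MIDI parsing library such as the one linked above.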
Have you tried Mingus? It works with pyFluidSynth: http://code.google.com/p/mingus/wiki/tutorialFluidsynth

Python: Creating raw audio

I use Windows 7. All I want to do is create raw audio and stream it to a speaker. After that, I want to create classes that can generate sine progressions (basically, a tone that slowly gets more and more shrill). After that, I want to put my raw audio into audio codecs and containers like .WAV and .MP3 without going insane. How might I be able to achieve this in Python without using dependencies that don't come with a standard install?
I looked up a great deal of files, descriptions, and related questions from here and all over the internet. I read about PCM and ADPCM, as well as A/D Converters. Where I get lost is somewhere between the ratio of byte input --> Kbps output, and all that stuff.
Really, all I want is for somebody to please be able to point me in the right direction to learn the audio formats precisely, and how to use them in Python (but first I want to start with raw audio).
This question really has two parts:
How do I generate audio signals?
How do I play audio signals through the speakers?
I wrote a simple wrapper around the python std lib's wave module, called pydub, which you can look at (on github) as a point of reference for how to manipulate raw audio data.
I generally just export the audio data to a file and then play it using VLC player. IMHO there's no reason to write a bunch of code to playback audio unless you're making a synthesizer or a game or some other realtime app.
Anyway, I hope that helps you get started :)
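For the "generate audio signals" half with nothing but the standard library, a minimal sketch might look like the following; the frequency, duration, amplitude, and file name are arbitrary choices:
import math
import struct
import wave

SAMPLE_RATE = 44100
FREQ = 440.0        # Hz
DURATION = 2.0      # seconds
AMPLITUDE = 0.5     # fraction of full scale

n_samples = int(SAMPLE_RATE * DURATION)
frames = b"".join(
    struct.pack("<h", int(AMPLITUDE * 32767 *
                          math.sin(2 * math.pi * FREQ * i / SAMPLE_RATE)))
    for i in range(n_samples)
)

# write the samples as a 16-bit mono PCM WAV file
with wave.open("tone.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)   # 2 bytes per sample = 16-bit
    out.setframerate(SAMPLE_RATE)
    out.writeframes(frames)
On Windows the result can then be played back with winsound.PlaySound, or simply opened in any media player.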

How do I mix audio files using python?

I would like to do basic audio mixing in python.
To give an example: I would like to take two mp3 files and add them together and return one mp3 file. Another example: I would like to take the first ten seconds of one mp3 file and add it to the beginning of another mp3 file.
What is the best way to accomplish these tasks? I would like to use built-in Python modules like audioop, but I cannot find any good tutorials or sample code for using the built-in functions.
I am going through the docs, but I am pretty confused and cannot figure out how to do things like this. I am not even sure that the built-in Python libraries handle MP3s; most of the material I have looked at seems to refer to WAV files. So, if this is the case, I guess a follow-up question would be: is there an easy way to convert an MP3 to a WAV for manipulation and back again?
You can do this pretty easily using pydub:
from pydub import AudioSegment
sound1 = AudioSegment.from_mp3("/path/to/file1.mp3")
sound2 = AudioSegment.from_mp3("/path/to/file2.mp3")
# mix sound2 with sound1, starting 5000ms into sound1
output = sound1.overlay(sound2, position=5000)
# save the result
output.export("mixed_sounds.mp3", format="mp3")
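For the second example in the question (putting the first ten seconds of one file in front of another), pydub's millisecond-based slicing and + concatenation cover it. A short sketch with placeholder file names:
from pydub import AudioSegment

sound1 = AudioSegment.from_mp3("/path/to/file1.mp3")
sound2 = AudioSegment.from_mp3("/path/to/file2.mp3")

# take the first 10 seconds of sound1 and put it in front of sound2
combined = sound1[:10000] + sound2
combined.export("combined.mp3", format="mp3")
Note that pydub hands MP3 decoding and encoding off to an external converter such as ffmpeg, so that needs to be installed.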
You could check out some of the code in the python audiotools project. It is a collection of command-line utilities which make use of a common python package. There is a utility included with audiotools (trackcat) which can con*cat*enate two or more audio tracks; another (tracksplit) can split an audio track (using a .cue file). These, as well as the numerous other included utilities, can work with audio files of various encodings, including mp3.
The way I've done this in the past is just to use subprocess and call sox.
E.g. subprocess.call(["sox", "in.1.mp3", "in.2.mp3", "out.mp3"])
