Only white noise in PyAudio blocking stream..? - python

I'm doing almost everything exactly as the pyaudio documentation describes. The main difference: in the stream loop, I am running my song data through a filter that outputs audio data as an array. The resulting output is only white noise. Here's my code:
import librosa as lr

y, sr = lr.load(song_filename)

# open stream
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=wf.getframerate(),
                output=True)

while _:
    x = my_filter(parameter, y=y[CHUNK])
    stream.write(x)
    CHUNK = updateCHUNK()
My guess is that librosa produces NumPy arrays while PyAudio expects bytes. So I tried this:
while _:
    x = my_filter(parameter, y=y[CHUNK])
    x = x.tobytes()
    stream.write(x)
    CHUNK = updateCHUNK()
Nothing was fixed by .tobytes(). Can anyone suggest solutions to this issue?
Edit:
I realize that I'm not ready to look for solutions. Still trying to understand what the problem is.
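For what it's worth, one likely culprit (an assumption, since the filter itself isn't shown) is a format mismatch: librosa.load returns float32 samples normalized to [-1, 1], while the stream above is opened with p.get_format_from_width(wf.getsampwidth()), which for a 16-bit file means paInt16. Writing float32 bytes into an int16 stream comes out as noise. A minimal sketch that keeps everything in float32 (the filename and chunking are placeholders):

import numpy as np
import pyaudio
import librosa as lr

y, sr = lr.load("song.wav")                # float32 samples in [-1, 1], mono by default

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32,  # match the float32 data instead of wf.getsampwidth()
                channels=1,
                rate=sr,
                output=True)

CHUNK = 1024
for start in range(0, len(y), CHUNK):
    x = y[start:start + CHUNK]             # or my_filter(...) applied to this slice
    stream.write(x.astype(np.float32).tobytes())

stream.stop_stream()
stream.close()
p.terminate()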

Related

Python - Reading a large audio file to a stream?

The Question
I want to load an audio file of any type (mp3, m4a, flac, etc) and write it to an output stream.
I tried using pydub, but it loads the entire file at once which takes forever and runs out of memory easily.
I also tried using python-vlc, but it's been unreliable and too much of a black box.
So, how can I open large audio files chunk-by-chunk for streaming?
Edit #1
I found half of a solution here, but I'll need to do more research for the other half.
TL;DR: Use subprocess and ffmpeg to convert the file to wav data, and pipe that data into np.frombuffer. The problem is, the subprocess still has to finish before frombuffer is used.
...unless it's possible to have the pipe written to on 1 thread while np reads it from another thread, which I haven't tested yet. For now, this problem is not solved.
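For what it's worth, Popen's stdout pipe can be read incrementally without waiting for ffmpeg to finish, so the two halves can be combined in one loop. A rough sketch (the filename, channel count, sample rate, and chunk size are placeholders; only the ffmpeg flags are standard):

import subprocess
import numpy as np

# Ask ffmpeg to decode any input format to raw 16-bit little-endian PCM on stdout.
proc = subprocess.Popen(
    ["ffmpeg", "-i", "input.m4a", "-f", "s16le", "-acodec", "pcm_s16le",
     "-ac", "1", "-ar", "44100", "pipe:1"],
    stdout=subprocess.PIPE,
    stderr=subprocess.DEVNULL)

CHUNK_BYTES = 4096
while True:
    raw = proc.stdout.read(CHUNK_BYTES)   # blocks until a chunk is available, not until EOF
    if not raw:
        break
    chunk = np.frombuffer(raw, dtype=np.int16)
    # ...filter/write `chunk` to the output stream here...
proc.wait()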
I think the Python package https://github.com/irmen/pyminiaudio can be helpful. You can stream an audio file like this:
import miniaudio

audio_path = "my_audio_file.mp3"
target_sampling_rate = 44100  # the input audio will be resampled at this sampling rate
n_channels = 1                # either 1 or 2
waveform_duration = 30        # in seconds
offset = 15                   # read only the interval [15 s, duration of file]

waveform_generator = miniaudio.stream_file(
    filename=audio_path,
    sample_rate=target_sampling_rate,
    seek_frame=int(offset * target_sampling_rate),
    frames_to_read=int(waveform_duration * target_sampling_rate),
    output_format=miniaudio.SampleFormat.FLOAT32,
    nchannels=n_channels)

for waveform in waveform_generator:
    pass  # do something with the waveform....
I know for sure that this works on mp3, ogg, wav, and flac, but for some reason it does not work on mp4/aac, and I am actually looking for a way to read mp4/aac.

Numpy RFFT/IRFFT volume

I'm doing an rfft and irfft from a wave file:
from scipy.io import wavfile
import numpy as np

samplerate, data = wavfile.read(location)
input = data.T[0]  # first track of audio
fftData = np.fft.rfft(input[sample:], length)
output = np.fft.irfft(fftData).astype(data.dtype)
So it reads from a file and then does an rfft. However, it produces a lot of noise when I play the audio with a PyAudio stream. I searched for an answer to this question and used this solution:
rfft or irfft increasing wav file volume in python
That is why I have the .astype(data.dtype) when doing the irfft. It reduced the noise a bit, but the result still sounds all wrong.
This is the playback, where p is the PyAudio instance:
stream = p.open(format=pyaudio.paFloat32,
                channels=1,
                rate=fs,
                output=True)

stream.write(output)
stream.stop_stream()
stream.close()
p.terminate()
So what am I doing wrong here?
Thanks!
Edit: I also tried using .astype(dtype=np.float32) when doing the irfft, since PyAudio uses that format when streaming the audio. However, it was still noisy.
The best working solution so far seems to be normalization by the median value and using .astype(np.float32), since the PyAudio output is float32:
samplerate, data = wavfile.read(location)
input = data.T[0] # first track of audio
fftData = np.fft.rfft(input[sample:], length)
fftData = np.divide(fftData, np.median(fftData))
output = np.fft.irfft(fftData).astype(dtype=np.float32)
If anyone has better solutions, I'd like to hear them. I tried mean normalization, but it still resulted in clipping audio, and normalization with np.max made the whole audio too quiet. This normalization problem with FFT is always giving me trouble, and I haven't found any 100% working solutions here on SO.
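One more option, offered as an untested suggestion rather than a benchmarked fix: if the wav file holds integer PCM, scale the time-domain samples into the [-1, 1] range that paFloat32 expects before the FFT, so the spectrum itself never needs normalizing (location, sample, and length are reused from the snippets above):

import numpy as np
from scipy.io import wavfile

samplerate, data = wavfile.read(location)
mono = data.T[0].astype(np.float32)

# Scale integer PCM into [-1, 1]; paFloat32 expects samples in that range.
if np.issubdtype(data.dtype, np.integer):
    mono /= np.iinfo(data.dtype).max

fftData = np.fft.rfft(mono[sample:], length)
output = np.fft.irfft(fftData).astype(np.float32)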

How to play a stereo aiff file with pyaudio?

This plays mono aifc files, but for any stereo files I get a loud blast of static:
import pyaudio
import aifc

CHUNK = 1024

wf = aifc.open('C:\\path_to_file.aiff', 'rb')
p = pyaudio.PyAudio()

stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=wf.getframerate(),
                output=True)

data = wf.readframes(CHUNK)
while data != '':
    stream.write(data)
    data = wf.readframes(CHUNK)

stream.stop_stream()
stream.close()
p.terminate()
The stereo file I am testing with: https://archive.org/details/TestAifAiffFile
I'm on windows 7, if that's important.
Swapping every other sample does the trick. Load up the whole file into data, then do
import numpy

a = numpy.fromstring(data, dtype='<i2')
temp = a[1::2].copy()   # copy: a plain slice is only a view, and the swap would silently fail
a[1::2] = a[::2]
a[::2] = temp
Then play a as though it were a string of audio samples rather than a numpy array. I've tested it on two different aiff files, and in both cases it preserves both channels and plays correctly.
This probably works because the file has the opposite byte order to what PyAudio expects.
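If byte order really is the culprit (AIFF stores big-endian PCM, while PyAudio on a little-endian machine expects native order), a more direct fix, offered here only as an untested alternative, is to reinterpret the frames as big-endian and convert them, reusing data and stream from the question:

import numpy as np

# Interpret the AIFF frames as big-endian 16-bit PCM and convert to native byte order.
a = np.frombuffer(data, dtype='>i2').astype(np.int16)
stream.write(a.tobytes())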
I tried your code on Linux, and I also get a horrible noise.
The aifc module seems to read the file correctly, I converted it to a NumPy array and this looks fine:
import numpy as np
data = wf.readframes(wf.getnframes())
sig = np.frombuffer(data, dtype='<i2').reshape(-1, wf.getnchannels())
So I guess the problem is in PyAudio or in your usage of it.
I don't know a solution but I can offer you an alternative: Use soundfile and sounddevice.
import soundfile as sf
import sounddevice as sd
data, fs = sf.read('02DayIsDone.aif')
sd.play(data, fs, blocking=True)
Instead of doing tricky stuff with the byte string, you could also consider the audioread package instead of aifc/wave/... It decodes your audio file using whichever backend is available, and regardless of the file format it always returns buffers of 16-bit little-endian signed integer PCM data that you can feed to PyAudio.
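A minimal sketch of that audioread approach (the file name is taken from the answer above; treat the snippet as untested):

import audioread
import pyaudio

p = pyaudio.PyAudio()
with audioread.audio_open('02DayIsDone.aif') as f:
    # audioread always yields 16-bit little-endian signed PCM buffers
    stream = p.open(format=pyaudio.paInt16,
                    channels=f.channels,
                    rate=f.samplerate,
                    output=True)
    for buf in f:
        stream.write(buf)
    stream.stop_stream()
    stream.close()
p.terminate()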

how to adjust the volume of the audio file by serially getting the voltage signals from the potentiometer using arduino board and python scripts

I want to adjust the volume of an mp3 file while it is playing by turning a potentiometer. I am reading the potentiometer signal serially via an Arduino board with Python scripts. With the help of the pydub library I am able to read the file, but I cannot adjust the volume while it is playing. This is the code I have after a long search.
I have included only the pydub portion. For your information, I am using VLC media player for changing the volume.
>>> from pydub import AudioSegment
>>> song = AudioSegment.from_wav(r"C:\Users\RAJU\Desktop\En_Iniya_Ponnilave.wav")
While the file is playing, I cannot adjust the volume. Please, can someone explain how to do it?
First you need to decode your audio signal to raw audio and split your signal into frames; then, at every frame, you can manipulate the audio and change the volume, the pitch, the speed, etc.
To change the volume you just need to multiply your raw audio vector by a factor (this can be your potentiometer data signal).
The factor is handled differently depending on whether your vector is in short int or floating-point format.
One way to get raw audio data from wav files in Python is the wave library:
import wave
import numpy

spf = wave.open('wavfile.wav', 'r')

# Extract raw audio from the wav file
signal = spf.readframes(-1)
decoded = numpy.fromstring(signal, numpy.float32)
Now you can multiply the decoded vector by a factor. For example, if you want to increase the level by 10 dB you need to calculate 10^(dBValue/20), which in Python is 10**(10/20.0) ≈ 3.1623:
newsignal = decoded * 3.1623
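As a concrete illustration of the point about formats (my own sketch, not from the original answer): with 16-bit integer data you have to clip back into the int16 range after scaling, while float audio is usually kept within [-1, 1]:

import numpy as np

def apply_gain(samples, factor):
    """Scale a raw audio vector by `factor`, handling int16 and float input differently."""
    if samples.dtype == np.int16:
        scaled = samples.astype(np.float32) * factor
        return np.clip(scaled, -32768, 32767).astype(np.int16)
    # float audio: keep within [-1, 1] so playback does not clip
    return np.clip(samples * factor, -1.0, 1.0)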
Now you need to encode the vector again to play the new framed audio; you can use "from struct import pack" and PyAudio to do it:
stream = pyaud.open(
    format=pyaudio.paFloat32,
    channels=1,
    rate=44100,
    output=True,
    input=True)
EncodeAgain = pack("%df"%(len(newsignal)), *list(newsignal))
And finally play your framed audio. Note that you will do this for every frame, playing inside one loop; this process is fast and the latency can be imperceptible.
stream.write(EncodeAgain)
PS: This example is for floating-point format!
Ederwander, as you said I have tried coding this, but when packing the data I get all zeros, so it is not streaming. I understand the problem may be in converting between the data formats. This is the code I have written; please look at it and offer suggestions.
import sys
import serial
import time
import os
from pydub import AudioSegment
import wave
from struct import pack
import numpy
import pyaudio

CHUNK = 1024

wf = wave.open(r'C:\Users\RAJU\Desktop\En_Iniya_Ponnilave.wav', 'rb')

# instantiate PyAudio (1)
p = pyaudio.PyAudio()

# open stream (2)
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=wf.getframerate(),
                output=True)

# read data
data_read = wf.readframes(CHUNK)
decoded = numpy.fromstring(data_read, 'int32', sep='')
data = decoded * 3.123

while(1):
    EncodeAgain = struct.pack(h, data)
    stream.write(EncodeAgain)
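For reference, here is a sketch of how that loop might look for a 16-bit wav, with the gain factor hard-coded where the serial potentiometer reading would go (this is my own guess at the intent, not tested code from the thread):

import wave
import numpy as np
import pyaudio

CHUNK = 1024
wf = wave.open(r'C:\Users\RAJU\Desktop\En_Iniya_Ponnilave.wav', 'rb')

p = pyaudio.PyAudio()
stream = p.open(format=p.get_format_from_width(wf.getsampwidth()),
                channels=wf.getnchannels(),
                rate=wf.getframerate(),
                output=True)

volume = 0.5                       # replace with a value derived from the potentiometer
data = wf.readframes(CHUNK)
while data:
    samples = np.frombuffer(data, dtype=np.int16).astype(np.float32)
    scaled = np.clip(samples * volume, -32768, 32767).astype(np.int16)
    stream.write(scaled.tobytes())
    data = wf.readframes(CHUNK)    # re-read `volume` from serial here for live control

stream.stop_stream()
stream.close()
p.terminate()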

Playing a sound from a wave form stored in an array

I'm currently experimenting with generating sounds in Python, and I'm curious how I can take an array representing a waveform (with a sample rate of 44100 Hz) and play it. I'm looking for pure Python here, rather than relying on a library that supports more than just the .wav format.
Or use the sounddevice module. Install it using pip install sounddevice, but you may need this first: sudo apt-get install libportaudio2
Absolute basics:
import numpy as np
import sounddevice as sd

# myarray must be a NumPy array; if not, convert with np.array(myarray)
# it may also need to be normalised, as in the example below
sd.play(myarray)
A few more options:
import numpy as np
import sounddevice as sd

# variables
samplfreq = 100   # the sampling frequency of your data (mine = 100 Hz, yours = 44100)
factor = 10       # increase/decrease frequency (speed up / slow down) by this factor (normal speed = 1)

# data
print('..interpolating data')
arr = myarray

# normalise the data to between -1 and 1; un-normalised data will be very noisy when played
sd.play(arr / np.max(np.abs(arr)), samplfreq * factor)
You should use a library. Writing it all in pure Python could take many thousands of lines of code to interface with the audio hardware!
With a library, e.g. audiere, it will be as simple as this:
import audiere
ds = audiere.open_device()
os = ds.open_array(input_array, 44100)
os.play()
There's also pyglet, pygame, and many others..
Edit: the audiere module mentioned above appears to be no longer maintained, but my advice to rely on a library stays the same. Take your pick of a current project here:
https://wiki.python.org/moin/Audio/
https://pythonbasics.org/python-play-sound/
The reason there aren't many high-level "batteries included" options in the stdlib is that interactions with audio hardware can be very platform-dependent.
I think you may want to look at this list:
http://wiki.python.org/moin/PythonInMusic
It lists many useful tools for working with sound.
To play sound given an array input_array of 16-bit samples, here is a modified example from the PyAudio documentation page:
import pyaudio

# instantiate PyAudio (1)
p = pyaudio.PyAudio()

# open stream (2); 2 is the size in bytes of int16
stream = p.open(format=p.get_format_from_width(2),
                channels=1,
                rate=44100,
                output=True)

# play stream (3), blocking call
stream.write(input_array)

# stop stream (4)
stream.stop_stream()
stream.close()

# close PyAudio (5)
p.terminate()
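If input_array is a NumPy array rather than a bytes object, converting it explicitly first is the safest route (an addition of mine, not part of the original example):

import numpy as np

# assuming input_array holds 16-bit samples as a NumPy array
stream.write(input_array.astype(np.int16).tobytes())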
Here's a snippet of code taken from this Stack Overflow answer, with an added example to play a NumPy array (a scipy-loaded sound file):
from wave import open as waveOpen
from ossaudiodev import open as ossOpen
from ossaudiodev import AFMT_S16_NE
import numpy as np
from scipy.io import wavfile
# from https://stackoverflow.com/questions/307305/play-a-sound-with-python/311634#311634
# run this: sudo modprobe snd-pcm-oss
s = waveOpen('example.wav','rb')
(nc,sw,fr,nf,comptype, compname) = s.getparams( )
dsp = ossOpen('/dev/dsp','w')
print(nc,sw,fr,nf,comptype, compname)
_, snp = wavfile.read('example.wav')
print(snp)
dsp.setparameters(AFMT_S16_NE, nc, fr)
data = s.readframes(nf)
s.close()
dsp.write(snp.tobytes())
dsp.write(data)
dsp.close()
Basically you can just call the tobytes() method; the returned bytes can then be played.
P.S. This method is super fast.
