I'm using pyaudio to take input from a microphone or read a wav file, and analyze the stream while playing it. I want to only analyze the right channel if the input is stereo. I've been able to extract the data and convert to integers using loops:
levels = []
length = len(data)
if channels == 1:
    for i in range(length//2):
        volume = abs(struct.unpack('<h', data[i:i+2])[0])
        levels.append(volume)
elif channels == 2:
    for i in range(length//4):
        j = 4 * i + 2
        volume = abs(struct.unpack('<h', data[j:j+2])[0])
        levels.append(volume)
I think this is working correctly; I know it runs without error on a laptop and a Raspberry Pi 3, but it appears to consume too much time on a Raspberry Pi Zero when simultaneously streaming the output to a speaker. I figure that eliminating the loop and using numpy may help. I assume I need to use np.ndarray to do this, that the first parameter will be (CHUNK,), where CHUNK is my chunk size for analyzing the audio (I'm using 1024), and that the format would be '<h', as in the struct code above. But I'm at a loss as to how to code it correctly for each of the two cases (mono, and right channel only for stereo). How do I create the numpy arrays for each of the two cases?
Here you are reading 16-bit integers from a binary file. It seems that you first read the data into a data variable with something like data = f.read(), which is not shown here. Then you do:
for i in range(length//2):
    volume = abs(struct.unpack('<h', data[i:i+2])[0])
    levels.append(volume)
BTW, that code is wrong; it should be abs(struct.unpack('<h', data[2*i:2*i+2])[0]), otherwise you are overlapping bytes from different values.
To do the same with numpy, you should just do this (instead of both f.read() and the whole loop):
data = np.fromfile(f, dtype='<i2')
This is over 100 times faster than the manual thing above in my test on 5 MB of data.
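If you want to reproduce the comparison, here is a rough benchmark sketch (the synthetic data and the timing harness are my own assumptions, not the original test file):

import struct
import timeit

import numpy as np

# Synthetic 5 MB of little-endian 16-bit samples (size is an assumption)
raw = np.random.randint(-2**15, 2**15, 2_500_000, dtype='<i2').tobytes()

def manual():
    return [abs(struct.unpack('<h', raw[2*i:2*i+2])[0]) for i in range(len(raw)//2)]

def vectorized():
    return np.abs(np.frombuffer(raw, dtype='<i2'))

print(timeit.timeit(manual, number=1))
print(timeit.timeit(vectorized, number=1))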
In the second case, you have interleaved left-right-left-right values. Again you can read them all (assuming you have enough memory) and then access only one half:
data = np.fromfile(f, dtype='<i2')
left = data[::2]
right = data[1::2]
This reads everything, even though you need just one half, but it is still much, much faster.
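Equivalently, you can view the interleaved buffer as frames and index a column, which some find more readable:

# View the interleaved samples as (n_frames, 2): column 0 is left, column 1 is right
stereo = data.reshape(-1, 2)
right = stereo[:, 1]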
EDIT: If the data is not coming from a file, np.fromfile can be replaced with np.frombuffer. Then you have this:
channel_data = np.frombuffer(data, dtype='<i2')
if channels == 2:
    channel_data = channel_data[1::2]
levels = np.abs(channel_data)
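For context, a minimal sketch of how this might slot into a PyAudio read loop (the stream parameters are my assumptions based on the question):

import numpy as np
import pyaudio

CHUNK = 1024
channels = 2  # or 1 for mono input

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16, channels=channels, rate=44100,
                input=True, frames_per_buffer=CHUNK)

data = stream.read(CHUNK)  # raw interleaved bytes
channel_data = np.frombuffer(data, dtype='<i2')
if channels == 2:
    channel_data = channel_data[1::2]  # keep only the right channel
levels = np.abs(channel_data)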
I accidentally forgot to convert some NumPy arrays to bytes objects when using PyAudio, but to my surprise it still played audio, even if it sounded a bit off. I wrote a little test script (see below) for playing 1 second of a 440Hz tone, and it seems like writing a NumPy array directly to a PyAudio Stream cuts that tone short.
Can anyone explain why this happens? I thought a NumPy array was a contiguous sequence of bytes with some header information about its dtype and strides, so I would've predicted that PyAudio played the full second of the tone after some garbled audio from the header, not cut the tone off.
# script segment
import pyaudio
import numpy as np
RATE = 48000
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32, channels=1, rate=RATE, output=True)
TONE = 440
SECONDS = 1
t = np.arange(0, 2*np.pi*TONE*SECONDS, 2*np.pi*TONE/RATE)
sina = np.sin(t).astype(np.float32)
sinb = sina.tobytes()
# console commands segment
stream.write(sinb) # bytes object plays 1 second of 440Hz tone
stream.write(sina) # still plays 440Hz tone, but noticeably shorter than 1 second
The problem is more subtle than you describe. Your first call is passing a bytes object of size 192,000. The second call is passing a NumPy array of 48,000 float32 values. pyaudio handles both of them, and passes the buffer to portaudio to be played.
However, when you opened pyaudio, you told it you were sending paFloat32 data, which has 4 bytes per sample. The pyaudio write handler takes the length of the array you gave it, and divides by the number of channels times the sample size to determine how many audio samples there are. In your second call, the length of the array is 48,000, which it divides by 4, and thereby tells portaudio "there are 12,000 samples here".
So everyone understood the format but was confused about the size. If you change the second call to
stream.write(sina, 48000)
then no one has to guess, and it works perfectly fine.
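Equivalently, you can convert the array to bytes yourself, or pass the frame count as a keyword; both remove the guesswork:

stream.write(sina.tobytes())              # byte length is unambiguous
stream.write(sina, num_frames=len(sina))  # or state the frame count explicitly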
I have tried using the Pydub library; however, it only allows the reduction or increase of a certain amount of decibels. How would I proceed if I wanted, for example, to reduce the volume of the wav by a certain percent?
This is simple enough to do with just the tools in the stdlib.
First, you use wave to open the input file and create an output file:
import audioop
import os
import wave

pathout = os.path.splitext(path)[0] + '-quiet.wav'
with wave.open(path, 'rb') as fin, wave.open(pathout, 'wb') as fout:
Now, you have to copy over all the wave params, and hold onto the sample width for later:
    fout.setparams(fin.getparams())
    sampwidth = fin.getsampwidth()
Then you loop over frames until done:
    while True:
        frames = bytearray(fin.readframes(1024))
        if not frames:
            break
You can use audioop to process this data:
        frames = audioop.mul(frames, sampwidth, factor)
… but this will only work for 16-bit little-endian signed LPCM wave files (the most common kind, but not the only kind). You could solve that with other functions—most importantly, lin2lin to handle 8-bit unsigned LPCM (the second most common kind). But it's worth understanding how to do it manually:
        for i in range(0, len(frames), sampwidth):
            if sampwidth == 1:
                # 8-bit unsigned
                frames[i] = int(round((frames[i] - 128) * factor + 128))
            else:
                # 16-, 24-, or 32-bit signed little-endian
                sample = int.from_bytes(frames[i:i+sampwidth], 'little', signed=True)
                quiet = round(sample * factor)
                frames[i:i+sampwidth] = int(quiet).to_bytes(sampwidth, 'little', signed=True)
audioop.mul only handles the else part—but it does more than I've done here. In particular, it has to handle factors over 1: a naive multiply would overflow (here, to_bytes would raise OverflowError), while audioop.mul clips samples at the maximum amplitude, which is usually what you want. (It's worth reading the pure Python implementation from PyPy if you want to learn the basics of this stuff.)
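As an aside, the lin2lin route mentioned above might look roughly like this (a sketch of mine, not from the original answer; audioop.bias wraps on overflow, which is exactly what converts unsigned to signed here):

import audioop

# Rough sketch: widen 8-bit unsigned to 16-bit signed, scale, and convert back
if sampwidth == 1:
    signed = audioop.bias(frames, 1, -128)   # unsigned -> signed (wraps around)
    pcm16 = audioop.lin2lin(signed, 1, 2)    # widen to 16-bit
    pcm16 = audioop.mul(pcm16, 2, factor)
    frames = audioop.bias(audioop.lin2lin(pcm16, 2, 1), 1, 128)
else:
    frames = audioop.mul(frames, sampwidth, factor)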
If you also want to handle float32 files, you need to look at the format, because they have the same sampwidth as int32, and you'll probably want the struct module or the array module to pack and unpack them. If you want to handle even less common formats, like a-law and µ-law, you'll need to read a more detailed format spec. Notice that audioop has tools for handling most of them, like ulaw2lin to convert µ-law to LPCM so you can process it and convert it back—but again, it might be worth learning how to do it manually. And for some of them, like CoolEdit float24/32, you pretty much have to do it manually.
Anyway, once you've got the quieted frames, you just write them out:
        fout.writeframes(frames)
You could use the mul function from the built-in audioop module. This is what pydub uses internally, after converting the decibel value to a multiplication factor.
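For example, to reduce the volume to 50% of the original (a minimal sketch assuming 16-bit samples; raw_frames is a hypothetical bytes object from readframes):

import audioop

# Multiply every 16-bit sample by 0.5, i.e. cut the volume in half
quieter = audioop.mul(raw_frames, 2, 0.5)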
First some background
I am trying to write my own set of tools for video analysis, mainly for detecting render errors like flashing frames and possibly some other stuff in the future.
The (obvious) goal is to write a script that is faster and more accurate than me watching the file in real time.
Using OpenCV, I have something that looks like this:
import cv2

vid = cv2.VideoCapture("Video/OpenCV_Testfile.mov", cv2.CAP_FFMPEG)
width = 1024
height = 576
length = int(vid.get(cv2.CAP_PROP_FRAME_COUNT))

for f in range(length):
    blue_values = []
    vid.set(cv2.CAP_PROP_POS_FRAMES, f)
    is_read, frame = vid.read()
    if is_read:
        for row in range(height):
            for col in range(width):
                blue_values.append(frame[row][col][0])
        print(blue_values)

vid.release()
This just prints out a list of all the blue values of every frame, just for simplicity. (My actual script compares a few values across each frame and only saves the frame number when all are equal.)
Although this works, it is not a very fast operation (nested loops, but most importantly, the read() method has to be called for every frame, which is rather slow).
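Side note: since frame is already a NumPy array in OpenCV's BGR channel order, I suspect the nested loops can be collapsed into a single slice, something like:

# frame has shape (height, width, 3) in BGR order, so channel 0 is blue
blue_values = frame[:, :, 0].ravel().tolist()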
I tried to use multiprocessing but basically ended up having the same crashes as described here:
how to get frames from video in parallel using cv2 & multiprocessing in python
I have a 20s long 1024x576@25fps test file which performs as follows:
mov, ProRes: 15s
mp4, h.264: 30s (too slow)
My machine is capable of playing back h.264 in 1920x1080@50fps with mplayer (which uses ffmpeg to decode). So, I should be able to get more out of this. Which leads me to
my Question
How can I decode a video and simply dump all pixel values into a list for further (possibly multithreaded) operations? Speed is really all that matters. Note: I'm not fixated on OpenCV. Whatever works best.
Thanks!
Background
The binary file contains successive raw output from a camera sensor in the form of a Bayer pattern, i.e. the data is successive blocks of the form shown below, where each block is one image in the image stream:
[(bayer width) * (bayer height) * sizeof(short)]
Objective
To read information from a specific block of data and store it as an array for processing. I was digging through the opencv documentation and am totally lost on how to proceed. I apologize for the novice question, but any suggestions?
Assuming you can read the binary file (as a whole), I would try to use Numpy to read it into a numpy.array. You can use numpy.frombuffer (numpy.fromstring does the same but is deprecated for binary data) and, depending on the system the file was written on (little- or big-endian), use >i2 or <i2 as your data type (you can find the list of data types here).
Also note that > means big-endian and < means little-endian (more on that here).
You can set an offset and specify the length in order to read a certain block.
import numpy as np

with open('datafile.bin', 'rb') as f:
    dataBytes = f.read()

data = np.frombuffer(dataBytes[blockStartIndex:blockEndIndex], dtype='>i2')
In case you cannot read the file as a whole, I would use mmap (requires a little knowledge of C) in order to break it down to multiple files and then use the method above.
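A rough sketch of that approach (the byte offsets are hypothetical; Python's mmap module handles the low-level details):

import mmap

import numpy as np

block_start, block_end = 0, 2 * 1024 * 768  # hypothetical block offsets in bytes

with open('datafile.bin', 'rb') as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    data = np.frombuffer(mm[block_start:block_end], dtype='>i2')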
OP here, with @lsxliron's suggestion I looked into using Numpy to achieve my goals, and this is what I ended up doing:
import numpy as np

# Data read length (bayer_width and bayer_height are placeholders)
length = bayer_width * bayer_height
# In terms of bytes: short = 2
step = 2 * length
# Open the file
img = open("filename", "rb")
# Seek to the block we are interested in, i
img.seek(i * step)
# Copy the data as a Numpy array
Bayer = np.fromfile(img, dtype=np.uint16, count=length)
Bayer now holds the Bayer pattern values in the form of a NumPy array. Success!
I'm playing with pyaudio on a mac using a Saffire Pro 40 sound card.
Currently I have two inputs plugged in and I'd like to control the levels of the second input channel programmatically. (This works fine using the sound card's mix control software).
I've been going through the pyaudio docs, but haven't found anything obvious on this issue so far. What's the simplest way to essentially do what the mix control software does (control volume per channel) programmatically? (A Python API would be nice, but not essential)
To simplify: it looks like it's possible to manually read the streams from the channels I want to control, scale them using numpy, then write them as output, but I'm hoping there is a method to simply send a normalized value per channel to control it.
So instead of something like this:
stream1 = pyaudioInstance.open(format=FORMAT,
                               channels=CHANNELS,
                               rate=RATE,
                               input=True,
                               output=True,
                               input_device_index=0,
                               frames_per_buffer=CHUNK)
stream2 = pyaudioInstance.open(format=FORMAT,
                               channels=CHANNELS,
                               rate=RATE,
                               input=True,
                               input_device_index=1,
                               frames_per_buffer=CHUNK)
while processingAudio:
    # manually fetch each channel
    data1In = stream1.read(CHUNK)
    data2In = stream2.read(CHUNK)
    # convert to numpy to easily scale the arrays
    decodeddata1 = numpy.frombuffer(data1In, numpy.int16)
    decodeddata2 = numpy.frombuffer(data2In, numpy.int16)
    newdata = (decodeddata1 * 0.5 + decodeddata2 * 0.1).astype(numpy.int16)
    # finally write the processed data
    stream1.write(newdata.tobytes())
This is a bit misleading, because I would actually need to mix separate channels from the same input device index. However, what I'm hoping for is something like:
someSoundCardAPI.channels[0].setVolume(0.2)
Having a look at the Channel Maps example feels closer to what I'm after. At the moment I find the host_api_specific part of the API a bit confusing, and I was hoping someone already has some experience successfully using this.
I am using OSX 10.10
I don't really have any experience with OSX, so I don't know, but normally you can remote-control everything with AppleScript.
See, for example, this question.
It doesn't say how to control the volume of a single channel separately, though.
Probably you should ask there ...
Regarding the inferior work-around, you can use python-sounddevice to create a little (untested) Python script:
import sounddevice as sd

def callback(indata, outdata, frames, time, status):
    outdata[:] = indata * [1, 0.5]

with sd.Stream(channels=2, callback=callback):
    input()
This script will run until you press <Return> and it will reduce the volume of the second channel.
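If the Saffire isn't the default device, you can also select it explicitly (the device name here is a guess; check sd.query_devices() for the real one):

import sounddevice as sd

# Pick a specific interface by (sub)string name or index
with sd.Stream(device='Saffire', channels=2, callback=callback):
    input()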