Sampling in RasPi3 and ADS1115 using Sample count or time.time() - python

I'm using a Raspberry Pi 3 and an ADS1115. My project requires evenly spaced samples so I can plot and analyse the data. Other posts were about achieving 10k and 50k SPS, but I only require 500 SPS and even that isn't working. Is there a way to run my code for 120 seconds at 500 SPS and end up with 60,000 samples from both the A0 and A1 channels at the same time? I have attached the code for reference. Thanks in advance.
import Adafruit_ADS1x15
import time
import numpy as np

pga = 2/3         # Set full-scale range of the programmable gain amplifier
                  # (page 13 of the data sheet); change depending on the input voltage range
ADS1115 = 0x01    # Specify that the device being used is the ADS1115
                  # (for the ADS1015 use 0x00)

adc = Adafruit_ADS1x15.ADS1015()  # Create instance of the class ADS1x15

# Function to print sampled values to the terminal
def logdata():
    print "sps value should be one of: 8, 16, 32, 64, 128, 250, 475, 860, otherwise the value will default to 250"

    frequency = input("Input sampling frequency (Hz): ")   # Get sampling frequency from the user
    sps = input("Input sps (Hz): ")                        # Get ADS1115 sps value from the user
    time1 = input("Input sample time (seconds): ")         # Get how long to sample for from the user

    period = 1.0 / frequency           # Calculate sampling period
    datapoints = int(time1*frequency)  # Total number of samples to take, which must be an integer

    startTime = time.time()   # Time of first sample
    t1 = startTime            # t1 is the last sample time
    t2 = t1                   # t2 is the current time

    for x in range(0, datapoints):   # Loop in which data is sampled
        while (t2-t1 < period):      # Check if t2-t1 is less than the sample period;
            t2 = time.time()         # if it is, update t2 and check again
        t1 += period                 # Update last sample time by the sampling period
        print adc.read_adc(0, pga, data_rate=sps), "mV ", ("%.2f" % (t2-startTime)), "s"  # Print sampled value and time

# Call to logdata function
logdata()

1) Are you using the ADS1115 ???
adc = Adafruit_ADS1x15.ADS1015() # should be adc = Adafruit_ADS1x15.ADS1115()
2) You can't read two or more single-ended channels at the same time. In differential mode, two channels can be compared to yield one value.
To read a value from channel 1 in addition to channel 0, you would have to add another call in your loop:
print adc.read_adc(0, pga, data_rate=sps) ..... # original call for channel 0
print adc.read_adc(1, pga, data_rate=sps) ..... # new call for channel 1
3) Before a value can be read, the ADS must be configured with several parameters (channel, data rate, gain, etc.). After some time, which is needed for the analog-to-digital conversion, values can be read, either once or, in continuous mode, over and over again.
In the original Adafruit library this wait is calculated from the data rate (which is not reliable), and in the recent port to CircuitPython it will most often be around 0.01 s, because the conversion will most probably not have finished directly after configuration (check the _read method).
4) Reading at 500 SPS is about the fastest the ADS1115 can manage. The reference states that a conversion at 860 SPS takes 1.2 ms. Adding the time for configuration and reading, you will not be able to read two or more values continuously every 0.002 s, even if you were receiving notifications for conversion completion, as I described on my homepage, instead of waiting for a fixed period of time (a rough timing sketch follows this list).
5) I think the closest you can get with Python is to run two daisy-chained ADS1115s in continuous mode with GPIO notifications, but I have no experience with that.
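For reference, here is a minimal, untested timing sketch along the lines of points 2 and 4. It assumes the old Adafruit_Python_ADS1x15 library (the same read_adc() call used in the question) and simply measures how long a back-to-back A0/A1 read pair takes; on a Raspberry Pi 3 this typically comes out well above the 2 ms per pair that 500 SPS on two channels would require:
import time
import Adafruit_ADS1x15

adc = Adafruit_ADS1x15.ADS1115()   # ADS1115, not ADS1015
GAIN = 2/3                         # same PGA setting as in the question
PAIRS = 500                        # number of A0/A1 read pairs to time

start = time.time()
for _ in range(PAIRS):
    a0 = adc.read_adc(0, gain=GAIN, data_rate=860)   # channel A0
    a1 = adc.read_adc(1, gain=GAIN, data_rate=860)   # channel A1
elapsed = time.time() - start
print("%.3f ms per A0/A1 pair" % (1000.0 * elapsed / PAIRS))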

Related

InfluxDB Python Timing each write

So I have created a script to upload a list of 1 million points in batches of 4000 points every second, so it should take 250 batches to upload the data.
I have placed 2 timing functions in the script:
One around each write_api.write() to calculate the time it takes for each batch
One around the outer loop to calculate the time it takes to upload the whole 1 million points
However, each individual timing function says it takes on average 1 second to upload a batch, so in my opinion it should take 250 seconds to upload. The actual total measured by the outer loop is about 400 seconds, which is almost double, even if I sum up the 250 individual write times.
import random
import copy
import datetime
from influxdb_client import InfluxDBClient, Point, WriteOptions
from influxdb_client.client.exceptions import InfluxDBError
import time


class BatchingCallback(object):

    def success(self, conf: (str, str, str), data: str):
        print(f"Written batch: {conf}, data: {data}")

    def error(self, conf: (str, str, str), data: str, exception: InfluxDBError):
        print(f"Cannot write batch: {conf}, data: {data} due: {exception}")

    def retry(self, conf: (str, str, str), data: str, exception: InfluxDBError):
        print(f"Retryable error occurs for batch: {conf}, data: {data} retry: {exception}")


def write_data(url, token, bucket, org):
    callback = BatchingCallback()
    data = generate_list_dictionary()  # Generating the data, a list of 1 million points with 100 fields each
    total_start = time.perf_counter()
    points_per_batch = 4000
    with InfluxDBClient(url=url, token=token, org=org) as client:
        with client.write_api(write_options=WriteOptions(batch_size=points_per_batch),
                              success_callback=callback.success,
                              error_callback=callback.error,
                              retry_callback=callback.retry) as write_api:
            time_start = datetime.datetime.now()
            for data_point in range(0, len(data), points_per_batch):
                upper_index = data_point + points_per_batch  # Allows us to slice the data in batches of 4000
                seconds = datetime.timedelta(seconds=int(upper_index / points_per_batch))  # Allows us to send the data every second
                while True:
                    if datetime.datetime.now() >= time_start + seconds:
                        # Writing the data in batches and performing individual timings
                        start = time.perf_counter()
                        write_api.write(bucket=bucket, org=org, record=data[data_point:upper_index])
                        end = time.perf_counter()
                        print(f"Individual time is: {end-start}")
                        break
    total_end = time.perf_counter()
    print(f"Total time is: {total_end-total_start}")


if __name__ == '__main__':
    url = ""
    token = ""
    bucket = ""
    org = ""
    write_data(url, token, bucket, org)
As you can see, the total time gets printed as 450 seconds on average, while the individual time is 1 second on average; by that logic it should take about 250 seconds to upload, but it's almost double that.
So my question is: is the individual time that Python is calculating wrong? If so, how do I measure the time taken by every single write() I do?
The client version I'm using is 1.31.0, my Python version is 3.7.7, and I'm on Windows.
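One thing worth ruling out is where the busy-wait that schedules each batch onto the next second ends up in the measurements: it sits outside the individual perf_counter() pair but inside the total. A small self-contained sketch of that effect, with a toy stand-in for write_api.write() (not the real client):
import time

def upload_batches(n_batches=5, batch_interval=1.0, fake_write_seconds=0.8):
    def fake_write():                      # stand-in for write_api.write()
        time.sleep(fake_write_seconds)

    total_start = time.perf_counter()
    write_time_sum = 0.0
    for i in range(n_batches):
        # Busy-wait until this batch's scheduled second, like the original loop.
        while time.perf_counter() < total_start + i * batch_interval:
            pass
        t0 = time.perf_counter()
        fake_write()
        write_time_sum += time.perf_counter() - t0
    total = time.perf_counter() - total_start
    print("sum of individual write times: %.2f s, total: %.2f s" % (write_time_sum, total))

upload_batches()   # prints roughly 4.00 s vs 4.80 s
Note also that, as far as I know, the batching write API flushes batches in the background, so the time measured around write() may not cover the actual network upload; the final flush when the with blocks exit would then show up in the total but in none of the individual timings.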

How to properly read most recent value from serial?

I am reading values from a pressure-sensing mat which has 32x32 individual pressure points. It outputs the readings on serial as 1024 bytes with values between 1 and 250, plus one 'end token' byte which is always 255 (0xFF).
I thought the function below would flush/reset the input buffer and then take a 'fresh' reading and return the max pressure value from that reading whenever I call it.
However, none of ser.reset_input_buffer() and the similar methods seems to actually empty the buffer. When I press down on the mat, run the program and immediately release the pressure, I don't see the max value drop immediately. Instead, it seems to be going through the buffer one entry at a time.
import serial
import numpy as np
import time


def read_serial():
    ser_bytes = bytearray([0])

    # none of these seem to make a difference
    ser.reset_input_buffer()
    ser.flushInput()
    ser.flush()

    # 2050 bytes should always contain a whole chunk of 1025 bytes ending with 255 (0xFF)
    while len(ser_bytes) <= 2050:
        ser_bytes = ser_bytes + ser.read_until(b'\xFF')

    ser_ints = np.array(ser_bytes, dtype='int32')  # bytes to ints
    last_end_byte_index = np.max(np.where(ser_ints == 255))  # find the last end byte

    # get only the 1024 sensor readings as a 32x32 np array
    mat_reading = np.array(ser_ints[last_end_byte_index-1024:last_end_byte_index]).reshape(32, 32)
    return np.amax(mat_reading)


ser = serial.Serial('/dev/tty.usbmodem14201', 115200, timeout=1)

while True:
    print(read_serial())
    time.sleep(1)
The best solution I found so far is having a designated thread which keeps reading the buffer and updating a global variable. It works, but it seems a bit wasteful if I only want to read the value about every 60 seconds. Is there a better way?
Also... is there a better way to read the 1025-byte chunk representing the entire mat? There are no line breaks, so ser.readline() won't work.
Thanks! Not sure how to make an MWE with serial, sorry about that ;)
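For what it's worth, here is a minimal sketch of the background-reader approach described above (assuming Python 3, pyserial's read_until(), and the 1025-byte frames terminated by 0xFF from the question); a thread keeps draining the serial stream and stores only the most recent complete frame, so a query is always based on fresh data:
import threading
import serial
import numpy as np

class MatReader:
    def __init__(self, port='/dev/tty.usbmodem14201', baud=115200):
        self.ser = serial.Serial(port, baud, timeout=1)
        self.lock = threading.Lock()
        self.latest = None  # most recent 32x32 frame
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while True:
            # Read up to and including the 0xFF end token.
            frame = self.ser.read_until(b'\xff')
            if len(frame) == 1025 and frame[-1] == 0xff:
                mat = np.frombuffer(frame[:-1], dtype=np.uint8).reshape(32, 32)
                with self.lock:
                    self.latest = mat

    def max_pressure(self):
        with self.lock:
            return None if self.latest is None else int(self.latest.max())

# usage:
# reader = MatReader()
# print(reader.max_pressure())   # always based on the newest complete frame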

How to get accurate timing using microphone in python

I'm trying to do beat detection using a PC microphone and then, with the timestamp of each beat, calculate the distance between successive beats. I have chosen Python because there is plenty of material available and it's quick to develop. By searching the internet I have come up with this simple code (no advanced peak detection or anything yet; that comes later if need be):
import pyaudio
import struct
import math
import time

SHORT_NORMALIZE = (1.0/32768.0)


def get_rms(block):
    # RMS amplitude is defined as the square root of the
    # mean over time of the square of the amplitude.
    # So we need to convert this string of bytes into
    # a string of 16-bit samples...

    # We will get one short out for each
    # two chars in the string.
    count = len(block)/2
    format = "%dh" % (count)
    shorts = struct.unpack(format, block)

    # Iterate over the block.
    sum_squares = 0.0
    for sample in shorts:
        # sample is a signed short in +/- 32768.
        # Normalize it to 1.0.
        n = sample * SHORT_NORMALIZE
        sum_squares += n*n

    return math.sqrt(sum_squares / count)


CHUNK = 32
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100

p = pyaudio.PyAudio()

stream = p.open(format=FORMAT,
                channels=CHANNELS,
                rate=RATE,
                input=True,
                frames_per_buffer=CHUNK)

elapsed_time = 0
prev_detect_time = 0

while True:
    data = stream.read(CHUNK)
    amplitude = get_rms(data)
    if amplitude > 0.05:  # value set by observing graphed data captured from mic
        elapsed_time = time.perf_counter() - prev_detect_time
        if elapsed_time > 0.1:  # guard against multiple spikes at beat point
            print(elapsed_time)
            prev_detect_time = time.perf_counter()

def close_stream():
    stream.stop_stream()
    stream.close()
    p.terminate()
The code works pretty well in silence, and I was pretty satisfied the first couple of times I ran it, but then I tested how accurate it was and I was a little less satisfied. To test this I used two methods: a phone with a metronome set to 60 bpm (emitting tic-toc sounds into the microphone), and an Arduino hooked to a beeper, triggered at a 1 Hz rate by an accurate Chronodot RTC. The beeper beeps into the microphone, triggering a detection. With both methods the results look similar (the numbers represent the distance between two beat detections, in seconds):
0.9956681643835616
1.0056331689497717
0.9956100091324198
1.0058207853881278
0.9953449497716891
1.0052103013698623
1.0049350136986295
0.9859074337899543
1.004996383561644
0.9954095342465745
1.0061518904109583
0.9953025753424658
1.0051235068493156
1.0057199634703196
0.984839305936072
1.00610396347032
0.9951862648401821
1.0053146301369864
0.9960100821917806
1.0053391780821919
0.9947373881278523
1.0058608219178105
1.0056580091324214
0.9852110319634697
1.0054473059360731
0.9950465753424638
1.0058237077625556
0.995704694063928
1.0054566575342463
0.9851026118721435
1.0059882374429243
1.0052523835616398
0.9956161461187207
1.0050863926940607
0.9955758173515932
1.0058052968036577
0.9953960913242028
1.0048014611872205
1.006336876712325
0.9847434520547935
1.0059712876712297
Now I'm pretty confident that at least the Arduino is accurate to 1 ms (which is the targeted accuracy). The results tend to be off by ±5 ms, and now and then even 15 ms, which is unacceptable. Is there a way to achieve greater accuracy, or is this a limitation of Python / the sound card / something else? Thank you!
EDIT:
After incorporating tom10 and barny's suggestions into the code, the code looks like this:
import pyaudio
import struct
import math
import psutil
import os


def set_high_priority():
    p = psutil.Process(os.getpid())
    p.nice(psutil.HIGH_PRIORITY_CLASS)


SHORT_NORMALIZE = (1.0/32768.0)


def get_rms(block):
    # RMS amplitude is defined as the square root of the
    # mean over time of the square of the amplitude.
    # So we need to convert this string of bytes into
    # a string of 16-bit samples...

    # We will get one short out for each
    # two chars in the string.
    count = len(block)/2
    format = "%dh" % (count)
    shorts = struct.unpack(format, block)

    # Iterate over the block.
    sum_squares = 0.0
    for sample in shorts:
        # sample is a signed short in +/- 32768.
        # Normalize it to 1.0.
        n = sample * SHORT_NORMALIZE
        sum_squares += n*n

    return math.sqrt(sum_squares / count)


CHUNK = 4096
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 44100
RUNTIME_SECONDS = 10

set_high_priority()

p = pyaudio.PyAudio()

stream = p.open(format=FORMAT,
                channels=CHANNELS,
                rate=RATE,
                input=True,
                frames_per_buffer=CHUNK)

elapsed_time = 0
prev_detect_time = 0
amplitudes = []  # collected RMS values per sample group

TIME_PER_CHUNK = 1000 / RATE * CHUNK
SAMPLE_GROUP_SIZE = 32  # 1 sample = 2 bytes, group is closest to 1 msec elapsing
TIME_PER_GROUP = 1000 / RATE * SAMPLE_GROUP_SIZE

for i in range(0, int(RATE / CHUNK * RUNTIME_SECONDS)):
    data = stream.read(CHUNK)
    time_in_chunk = 0
    group_index = 0
    for j in range(0, len(data), (SAMPLE_GROUP_SIZE * 2)):
        group = data[j:(j + (SAMPLE_GROUP_SIZE * 2))]
        amplitude = get_rms(group)
        amplitudes.append(amplitude)
        if amplitude > 0.02:
            current_time = (elapsed_time + time_in_chunk)
            time_since_last_beat = current_time - prev_detect_time
            if time_since_last_beat > 500:
                print(time_since_last_beat)
                prev_detect_time = current_time
        time_in_chunk = (group_index+1) * TIME_PER_GROUP
        group_index += 1
    elapsed_time = (i+1) * TIME_PER_CHUNK

stream.stop_stream()
stream.close()
p.terminate()
With this code I achieved the following results (the units this time are milliseconds instead of seconds):
999.909297052154
999.9092970521542
999.9092970521542
999.9092970521542
999.9092970521542
1000.6349206349205
999.9092970521551
999.9092970521524
999.9092970521542
999.909297052156
999.9092970521542
999.9092970521542
999.9092970521524
999.9092970521542
If I didn't make any mistake, this looks a lot better than before and achieves sub-millisecond accuracy. I thank tom10 and barny for their help.
The reason you're not getting the right timing for the beats is that you're missing chunks of the audio data. That is, the chunks are being read by the soundcard, but you're not collecting the data before it's overwritten with the next chunk.
First, though, for this problem you need to distinguish between the ideas of timing accuracy and real-time response.
The timing accuracy of a sound card should be very good, much better than a ms, and you should be able to capture all of this accuracy in the data you read from the sound card. The real-time responsiveness of your computer's OS, on the other hand, is very bad, much worse than a ms. That is, you should easily be able to identify audio events (such as beats) to within a ms, but not identify them at the time they happen (instead, 30-200 ms later, depending on your system). This arrangement usually works for computers because general human perception of the timing of events is much coarser than a ms (except for rare specialized perceptual systems, like comparing auditory events between the two ears, etc.).
The specific problem with your code is that CHUNK is much too small, forcing the OS to query the sound card far too often. You have it at 32, so at 44100 Hz the OS needs to get to the sound card every 0.7 ms, which is too short a time for a computer that's tasked with doing many other things. If your OS doesn't fetch the chunk before the next one comes in, the original chunk is overwritten and lost.
To get this working so it's consistent with the constraints above, make CHUNK much larger than 32, more like 1024 (as in the PyAudio examples). Depending on your computer and what it's doing, even that may not be long enough (a short calculation of these buffer durations follows).
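A quick back-of-the-envelope check of those buffer durations (a sketch, not part of the original answer):
RATE = 44100
for CHUNK in (32, 1024, 4096):
    print(CHUNK, "samples ->", round(1000.0 * CHUNK / RATE, 2), "ms per buffer")
# 32 samples   -> ~0.73 ms  (the OS must service the card ~1378 times per second)
# 1024 samples -> ~23.2 ms
# 4096 samples -> ~92.9 ms  (the value used in the edited code above)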
If this type of approach won't work for you, you will probably need a dedicated real-time system like an Arduino. (Generally, though, this isn't necessary, so think twice before you decide that you need the Arduino. Usually, when I've seen people need true real-time, it's when trying to do something very quantitative and interactive with a human, like flash a light, have the person tap a button, flash another light, have the person tap another button, and so on, to measure response times.)

Python delays on Raspberry Pi

I'm trying to simulate a compound action potential for calibrating research instruments. The goal is to output a certain 10 µV signal at 250 Hz. The low voltage will be dealt with later; the main problem for me is the frequency. The picture below shows an overview of the system I'm trying to make.
By data acquisition from a live animal, and processing the data in MATLAB, I've made a low-noise signal with 789 values in 12-bit format. I stored this as CSV in a repository and cloned it to a Raspberry Pi using Git. Below is the Python script I've written on the RPi. You can skip to def main in the script to see the functionality.
#!/usr/bin/python
import spidev
from time import sleep
import RPi.GPIO as GPIO
import csv
import sys
import math

DEBUG = False
spi_max_speed = 20 * 1000000
V_Ref = 5000
Resolution = 2**12
CE = 0
data_points = 789   # number of values in signal12bit.csv

spi = spidev.SpiDev()
spi.open(0,CE)
spi.max_speed_hz = spi_max_speed

LDAQ = 22
GPIO.setmode(GPIO.BOARD)
GPIO.setup(LDAQ, GPIO.OUT)
GPIO.output(LDAQ,GPIO.LOW)

def setOutput(val):
    lowByte = val & 0b11111111  # Make bytes using MCP4921 data sheet info
    highByte = ((val >> 8) & 0xff) | 0b0 << 7 | 0b0 << 6 | 0b1 << 5 | 0b1 << 4
    if DEBUG:
        print("Highbyte = {0:8b}".format(highByte))
        print("Lowbyte  = {0:8b}".format(lowByte))
    spi.xfer2([highByte, lowByte])

def main():
    with open('signal12bit.csv') as signal:
        signal_length = float(raw_input("Please input signal length in ms: "))
        delay = float(raw_input("Please input delay after signal in ms: "))
        amplitude = float(raw_input("Please input signal amplitude in mV: "))
        print "Starting Simulant with signal length %.1f ms, delay %.1f ms and amplitude %.1f mV." % (signal_length, delay, amplitude)
        if not DEBUG : print "Press ctrl+c to close."
        sleep (1)  # Wait a sec before starting
        read = csv.reader(signal, delimiter=' ', quotechar='|')
        try:
            while(True):
                signal.seek(0)
                for row in read:  # Loop csv file rows
                    if DEBUG : print ', '.join(row)
                    setOutput(int(row[0])/int((V_Ref/amplitude)))  # Adjust amplitude, not super necessary to do in software
                    sleep(signal_length/(data_points*1000))  # Divide by 1000 to make it ms, divide by number of data points
                sleep(delay/1000)
        except (KeyboardInterrupt, Exception) as e:
            print(e)
            print "Closing SPI channel"
            setOutput(0)
            GPIO.cleanup()
            spi.close()

if __name__ == '__main__':
    main()
This script almost works as intended. Connecting the output pin of an MCP4921 DAC to an oscilloscope shows that it reproduces the signal very well, and it outputs the subsequent delay correctly.
Unfortunately, the data points are separated much further apart than I need them to be. The shortest time I can cram the signal into is about 79 ms. This is due to dividing by 789000 in the sleep function, which I know is too much to ask from Python and from the Pi, because reading the CSV file takes time. However, if I make an array manually and put those values out instead of reading the CSV file, I can achieve a frequency over 6 kHz with no loss.
My question is this: how can I get this signal to appear at a frequency of 250 Hz, and decrease that frequency reliably from the user's input? I've thought about manually writing the 789 values into an array in the script, and then changing the SPI speed to whatever value fits with 250 Hz. This would eliminate the slow CSV-reader function, but then you can't reduce the frequency from user input. In any case, eliminating the need for csv.reader would help a lot. Thanks!
Figured it out earlier today, so I thought I'd post an answer here, in case someone comes upon a similar problem in the future.
The problem with the internal delay between data points cannot be solved with sleep(), for several reasons. What I ended up doing was the following:
Move all math and function calling out of the critical loop
Do a linear regression analysis on the time it takes to transfer the values with no delay
Increase the number of data points in the CSV file to "plenty" (9600) in MATLAB
Calculate the number of points needed to meet the user's requested signal length (a small worked example follows this list)
Take evenly separated entries from the now bigger CSV file to fit that number of points as closely as possible
Calculate these values and then calculate the SPI bytes explicitly
Save the two byte lists, and output them directly in the critical loop
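As a small worked example of steps 4 and 5 (a sketch reusing the regression constants that appear in the code below; the numbers are approximate):
total_data_points = 9600      # length of the enlarged CSV file
signal_length_ms = 5          # example user input

# Linear-regression estimate of how many points fit in the requested length,
# then the downsampling step needed to pick roughly that many out of 9600.
data_points = int((1000 * signal_length_ms - 24.6418) / 12.3291)          # ~403
downsampling = int(round(float(total_data_points) / data_points))         # ~24
print(data_points, downsampling, len(range(0, total_data_points, downsampling)))   # 403 24 400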
The new code, with a bit of input checking, is below
#!/usr/bin/python
import spidev
from time import sleep
import RPi.GPIO as GPIO
import sys
import csv
import ast

spi_max_speed = 16 * 1000000   # 16 MHz
V_Ref = 5000                   # 5V in mV
Resolution = 2**12             # 12 bits for the MCP 4921
CE = 0                         # CE0 or CE1, select SPI device on bus
total_data_points = 9600       # CSV file length

spi = spidev.SpiDev()
spi.open(0,CE)
spi.max_speed_hz = spi_max_speed

LDAQ = 22
GPIO.setmode(GPIO.BOARD)
GPIO.setup(LDAQ, GPIO.OUT)
GPIO.output(LDAQ,GPIO.LOW)

def main():
    # User inputs and checking for digits
    signalLengthU = raw_input("Input signal length in ms, minimum 4: ")
    if signalLengthU.isdigit():
        signalLength = signalLengthU
    else:
        signalLength = 4

    delayU = raw_input("Input delay after signal in ms: ")
    if delayU.isdigit():
        delay = delayU
    else:
        delay = 0

    amplitudeU = raw_input("Input signal amplitude in mV, between 1 and 5000: ")
    if amplitudeU.isdigit():
        amplitude = amplitudeU
    else:
        amplitude = 5000

    # Calculate data points, delay, and amplitude
    data_points = int((1000*float(signalLength)-24.6418)/12.3291)
    signalDelay = float(delay)/1000
    setAmplitude = V_Ref/float(amplitude)

    # Load and save CSV file
    datain = open('signal12bit.csv')
    read = csv.reader(datain, delimiter=' ', quotechar='|')
    signal = []
    for row in read:
        signal.append(ast.literal_eval(row[0]))

    # Downsampling to achieve desired signal length
    downsampling = int(round(total_data_points/data_points))
    signalSpeed = signal[0::downsampling]
    listlen = len(signalSpeed)

    # Construction of SPI bytes, to avoid calling functions in critical loop
    lowByte = []
    highByte = []
    for i in signalSpeed:
        lowByte.append(int(i/setAmplitude) & 0b11111111)
        highByte.append(((int(i/setAmplitude) >> 8) & 0xff) | 0b0 << 7 | 0b0 << 6 | 0b1 << 5 | 0b1 << 4)

    print "Starting Simulant with signal length %s ms, delay %s ms and amplitude %s mV." % (signalLength, delay, amplitude)
    print "Press ctrl+c to stop."
    sleep (1)

    try:
        while(True):                                   # Main loop
            for i in range(listlen):
                spi.xfer2([highByte[i],lowByte[i]])    # Critical loop, no delay!
            sleep (signalDelay)

    except (KeyboardInterrupt, Exception) as e:
        print e
        print "Closing SPI channel"
        lowByte = 0 & 0b11111111
        highByte = ((0 >> 8) & 0xff) | 0b0 << 7 | 0b0 << 6 | 0b1 << 5 | 0b1 << 4
        spi.xfer2([highByte, lowByte])
        GPIO.cleanup()
        spi.close()

if __name__ == '__main__':
    main()
The result is exactly what I wanted. Below is an example from the oscilloscope with a signal length of 5 ms, i.e. 200 Hz. Thanks for your help, guys!

Implement realtime signal processing in Python - how to capture audio continuously?

I'm planning to implement a "DSP-like" signal processor in Python. It should capture small fragments of audio via ALSA, process them, then play them back via ALSA.
To get things started, I wrote the following (very simple) code.
import alsaaudio
inp = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NORMAL)
inp.setchannels(1)
inp.setrate(96000)
inp.setformat(alsaaudio.PCM_FORMAT_U32_LE)
inp.setperiodsize(1920)
outp = alsaaudio.PCM(alsaaudio.PCM_PLAYBACK, alsaaudio.PCM_NORMAL)
outp.setchannels(1)
outp.setrate(96000)
outp.setformat(alsaaudio.PCM_FORMAT_U32_LE)
outp.setperiodsize(1920)
while True:
    l, data = inp.read()
    # TODO: Perform some processing.
    outp.write(data)
The problem is, that the audio "stutters" and is not gapless. I tried experimenting with the PCM mode, setting it to either PCM_ASYNC or PCM_NONBLOCK, but the problem remains. I think the problem is that samples "between" two subsequent calls to "inp.read()" are lost.
Is there a way to capture audio "continuously" in Python (preferably without the need for too "specific"/"non-standard" libraries)? I'd like the signal to always get captured "in the background" into some buffer, from which I can read some "momentary state", while audio is further being captured into the buffer even during the time, when I perform my read operations. How can I achieve this?
Even if I use a dedicated process/thread to capture the audio, this process/thread will always at least have to (1) read audio from the source, (2) then put it into some buffer (from which the "signal processing" process/thread then reads). These two operations will therefore still be sequential in time and thus samples will get lost. How do I avoid this?
Thanks a lot for your advice!
EDIT 2: Now I have it running.
import alsaaudio
from multiprocessing import Process, Queue
import numpy as np
import struct

"""
A class implementing buffered audio I/O.
"""
class Audio:

    """
    Initialize the audio buffer.
    """
    def __init__(self):
        #self.__rate = 96000
        self.__rate = 8000
        self.__stride = 4
        self.__pre_post = 4
        self.__read_queue = Queue()
        self.__write_queue = Queue()

    """
    Reads audio from an ALSA audio device into the read queue.
    Supposed to run in its own process.
    """
    def __read(self):
        inp = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NORMAL)
        inp.setchannels(1)
        inp.setrate(self.__rate)
        inp.setformat(alsaaudio.PCM_FORMAT_U32_BE)
        inp.setperiodsize(self.__rate / 50)
        while True:
            _, data = inp.read()
            self.__read_queue.put(data)

    """
    Writes audio to an ALSA audio device from the write queue.
    Supposed to run in its own process.
    """
    def __write(self):
        outp = alsaaudio.PCM(alsaaudio.PCM_PLAYBACK, alsaaudio.PCM_NORMAL)
        outp.setchannels(1)
        outp.setrate(self.__rate)
        outp.setformat(alsaaudio.PCM_FORMAT_U32_BE)
        outp.setperiodsize(self.__rate / 50)
        while True:
            data = self.__write_queue.get()
            outp.write(data)

    """
    Pre-post data into the output buffer to avoid buffer underrun.
    """
    def __pre_post_data(self):
        zeros = np.zeros(self.__rate / 50, dtype = np.uint32)
        for i in range(0, self.__pre_post):
            self.__write_queue.put(zeros)

    """
    Runs the read and write processes.
    """
    def run(self):
        self.__pre_post_data()
        read_process = Process(target = self.__read)
        write_process = Process(target = self.__write)
        read_process.start()
        write_process.start()

    """
    Reads audio samples from the queue captured from the reading thread.
    """
    def read(self):
        return self.__read_queue.get()

    """
    Writes audio samples to the queue to be played by the writing thread.
    """
    def write(self, data):
        self.__write_queue.put(data)

    """
    Pseudonymize the audio samples from a binary string into an array of integers.
    """
    def pseudonymize(self, s):
        return struct.unpack(">" + ("I" * (len(s) / self.__stride)), s)

    """
    Depseudonymize the audio samples from an array of integers into a binary string.
    """
    def depseudonymize(self, a):
        s = ""
        for elem in a:
            s += struct.pack(">I", elem)
        return s

    """
    Normalize the audio samples from an array of integers into an array of floats with unity level.
    """
    def normalize(self, data, max_val):
        data = np.array(data)
        bias = int(0.5 * max_val)
        fac = 1.0 / (0.5 * max_val)
        data = fac * (data - bias)
        return data

    """
    Denormalize the data from an array of floats with unity level into an array of integers.
    """
    def denormalize(self, data, max_val):
        bias = int(0.5 * max_val)
        fac = 0.5 * max_val
        data = np.array(data)
        data = (fac * data).astype(np.int64) + bias
        return data

debug = True
audio = Audio()
audio.run()

while True:
    data = audio.read()
    pdata = audio.pseudonymize(data)
    if debug:
        print "[PRE-PSEUDONYMIZED] Min: " + str(np.min(pdata)) + ", Max: " + str(np.max(pdata))
    ndata = audio.normalize(pdata, 0xffffffff)
    if debug:
        print "[PRE-NORMALIZED] Min: " + str(np.min(ndata)) + ", Max: " + str(np.max(ndata))
        print "[PRE-NORMALIZED] Level: " + str(int(10.0 * np.log10(np.max(np.absolute(ndata)))))
    #ndata += 0.01 # When I comment in this line, it wreaks complete havoc!
    if debug:
        print "[POST-NORMALIZED] Level: " + str(int(10.0 * np.log10(np.max(np.absolute(ndata)))))
        print "[POST-NORMALIZED] Min: " + str(np.min(ndata)) + ", Max: " + str(np.max(ndata))
    pdata = audio.denormalize(ndata, 0xffffffff)
    if debug:
        print "[POST-PSEUDONYMIZED] Min: " + str(np.min(pdata)) + ", Max: " + str(np.max(pdata))
        print ""
    data = audio.depseudonymize(pdata)
    audio.write(data)
However, when I perform even the slightest modification to the audio data (e.g. comment that line back in), I get a lot of noise and extreme distortion at the output. It seems like I don't handle the PCM data correctly. The strange thing is that the output of the "level meter", etc. all appears to make sense. However, the output is completely distorted (but continuous) when I offset it just slightly.
EDIT 3: I just found out that my algorithms (not included here) work when I apply them to wave files. So the problem really appears to actually boil down to the ALSA API.
EDIT 4: I finally found the problems. They were the following.
1st - ALSA quietly "fell back" to PCM_FORMAT_U8_LE upon requesting PCM_FORMAT_U32_LE, thus I interpreted the data incorrectly by assuming that each sample was 4 bytes wide. It works when I request PCM_FORMAT_S32_LE.
2nd - The ALSA output seems to expect the period size in bytes, even though the specification explicitly states that it is expected in frames. So you have to set the period size four times as high for output if you use 32-bit sample depth.
3rd - Even in Python (where there is a "global interpreter lock"), processes are slow compared to threads. You can get latency down a lot by changing to threads, since the I/O threads basically don't do anything that's computationally intensive.
When you
read one chunk of data,
write one chunk of data,
then wait for the second chunk of data to be read,
then the buffer of the output device will become empty if the second chunk is not shorter than the first chunk.
You should fill up the output device's buffer with silence before starting the actual processing. Then small delays in either the input or output processing will not matter.
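For example, a minimal sketch of that pre-filling idea with pyalsaaudio, assuming the same mono / U32 / 1920-frame settings as the first code block above (note that for an unsigned format the true mid-scale silence would be 0x80000000 rather than all-zero bytes; this simply mirrors what __pre_post_data() in the question does):
import alsaaudio

RATE = 96000
PERIOD = 1920                       # frames per period, as in the question

outp = alsaaudio.PCM(alsaaudio.PCM_PLAYBACK, alsaaudio.PCM_NORMAL)
outp.setchannels(1)
outp.setrate(RATE)
outp.setformat(alsaaudio.PCM_FORMAT_U32_LE)
outp.setperiodsize(PERIOD)

silence = b'\x00' * (PERIOD * 4)    # 4 bytes per 32-bit mono frame
for _ in range(4):                  # queue a few periods before real audio starts
    outp.write(silence)
# ...then enter the normal read/process/write loop.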
You can do that all manually, as @CL recommends in their answer, but I'd recommend just using GNU Radio instead:
It's a framework that takes care of doing all the "getting small chunks of samples in and out your algorithm"; it scales very well, and you can write your signal processing either in Python or C++.
In fact, it comes with an Audio Source and an Audio Sink that directly talk to ALSA and just give/take continuous samples. I'd recommend reading through GNU Radio's Guided Tutorials; they explain exactly what is necessary to do your signal processing for an audio application.
A really minimal flow graph would look like this: [flow graph image: Audio Source → High Pass Filter → Audio Sink]
You can substitute the high pass filter with your own signal processing block, or use any combination of the existing blocks.
There are helpful things like file and WAV-file sinks and sources, filters, resamplers, amplifiers (OK, multipliers), …
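For illustration, a rough Python sketch of such a flow graph, assuming GNU Radio 3.8+ and its Python API; the 48 kHz rate and 300 Hz high-pass cutoff are arbitrary placeholder values, and the empty device strings select the default audio device:
from gnuradio import gr, audio, filter
from gnuradio.filter import firdes

class AudioPassthrough(gr.top_block):
    def __init__(self, samp_rate=48000):
        gr.top_block.__init__(self, "audio passthrough")
        src = audio.source(samp_rate, "", True)            # capture from default device
        hpf = filter.fir_filter_fff(                        # stand-in processing block
            1, firdes.high_pass(1.0, samp_rate, 300, 100))
        snk = audio.sink(samp_rate, "", True)               # playback to default device
        self.connect(src, hpf, snk)

if __name__ == '__main__':
    tb = AudioPassthrough()
    tb.start()
    input("Running, press Enter to stop")
    tb.stop()
    tb.wait()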
I finally found the problems. They were the following.
1st - ALSA quietly "fell back" to PCM_FORMAT_U8_LE upon requesting PCM_FORMAT_U32_LE, thus I interpreted the data incorrectly by assuming that each sample was 4 bytes wide. It works when I request PCM_FORMAT_S32_LE.
2nd - The ALSA output seems to expect the period size in bytes, even though the specification explicitly states that it is expected in frames. So you have to set the period size four times as high for output if you use 32-bit sample depth.
3rd - Even in Python (where there is a "global interpreter lock"), processes are slow compared to threads. You can get latency down a lot by changing to threads, since the I/O threads basically don't do anything that's computationally intensive.
Audio is gapless and undistorted now, but latency is far too high.
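A minimal sketch of what findings 1 and 2 translate to with pyalsaaudio (based purely on the observations above, which may well be driver- or version-specific):
import alsaaudio

RATE = 8000
FRAMES_PER_PERIOD = RATE // 50              # 160 frames, as in the code above

inp = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NORMAL)
inp.setchannels(1)
inp.setrate(RATE)
inp.setformat(alsaaudio.PCM_FORMAT_S32_LE)  # signed 32-bit, so ALSA does not silently fall back
inp.setperiodsize(FRAMES_PER_PERIOD)

outp = alsaaudio.PCM(alsaaudio.PCM_PLAYBACK, alsaaudio.PCM_NORMAL)
outp.setchannels(1)
outp.setrate(RATE)
outp.setformat(alsaaudio.PCM_FORMAT_S32_LE)
outp.setperiodsize(FRAMES_PER_PERIOD * 4)   # 4 bytes per frame, per observation 2 above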
