Using A-Weighting on time signal - python

I've been trying to solve this for a couple of weeks now, but it seems I'm not able to wrap my head around it. The task is pretty simple: I'm getting a voltage signal from a microphone, and in the end I want to know how loud it is out there in dB(A).
There are so many problems I don't even know where to start. Let's begin with my idea.
1. Convert the voltage signal into a signal in pascals [Pa].
2. Run an FFT on that signal so I know which frequencies I'm dealing with.
3. Somehow apply the A-weighting to that, but since I'm handling my values in [Pa] I can't just multiply or add my A-weighting.
4. Run an iFFT to get back to my time signal.
5. Go from Pa to dB.
6. Calculate the RMS and I'm done. (Hopefully)
The main problem is the A-weighting. I really don't get how I can implement it on a live signal. And since the FFT produces complex values, I'm also a little confused by that.
Maybe you get the idea/problem/workflow and can help me get at least a little bit closer to the goal.
A little disclaimer: I am 100% new to the world of acoustics, so please explain it like you would to a little child :D and I'm programming in Python.
Thanks in advance for your time!

To give you a short answer: this task can be done in only a few steps, using the waveform_analysis package and Parseval's theorem.
The simplest implementation I can come up with is:
1. A-weight the signal in the time domain, using this library:
import waveform_analysis
weighted_signal = waveform_analysis.A_weight(signal, fs)
2. Take the RMS of the weighted signal (using the fact that, by Parseval's theorem, the power in the time domain equals the power in the frequency domain):
import numpy as np
rms_value = np.sqrt(np.mean(np.abs(weighted_signal)**2))
3. Convert this amplitude to dB:
result = 20 * np.log10(rms_value)
Running these three snippets together gives you the result in dB(A)FS.
To get the dB(A) value referenced to pascals, you need to know what 0 dBPa corresponds to in dBFS. This is usually determined with a calibrated source such as https://www.grasacoustics.com/products/calibration-equipment/product/756-42ag
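For illustration, here is a minimal sketch of that calibration step, assuming you have a recording of a 94 dB SPL (1 Pa RMS) calibrator tone; calib_signal and the other names are assumptions, not part of any library:
import numpy as np
calib_rms = np.sqrt(np.mean(calib_signal**2))   # calib_signal: recorded calibrator tone
calib_dbfs = 20 * np.log10(calib_rms)           # digital level of the 94 dB SPL tone
offset = 94.0 - calib_dbfs                      # dB SPL corresponding to 0 dBFS
result_dba_spl = result + offset                # result is the dB(A)FS value from above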
One flaw of this implementation is that it does not pre-window the time signal. This is, on the other hand, not an issue for sufficiently long signals.

Hello Christian Fichtelberg and welcome to Stack Overflow. I believe your question could be answered more easily on DSP StackExchange, but I will try to provide a quick and dirty answer.
To avoid taking the signal into the frequency domain, doing the multiplication there, and coming back, you can implement the A-weighting filter in the time domain and perform the filtering as a convolution there. (Recall that convolution in the time domain, where your signal "resides", is equivalent to multiplication in the frequency domain; if you are unfamiliar with this, have a quick look at Wikipedia's convolution page.)
I won't go into the details of the possible pros and cons of each method (time-domain convolution vs frequency domain multiplication). You could have a search on DSP SE or look into some textbook on DSP (such as Oppenheim's Digital Signal Processing, or an equivalent book by Proakis and Manolakis).
According to IEC 61672-1:2013, the digital filter should be "translated" from the analogue filter (a good way to do so is the bilinear transform). The proposed filter is a fairly "simple" IIR (Infinite Impulse Response) filter.
I will skip a full implementation here, as it has been provided by others. Please find a MATLAB implementation, a Python implementation (most probably what you are seeking for your application), and a quite "high-level" answer on DSP SE with some links, as well as information on designing filters for arbitrary sample rates, also on DSP SE.
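For orientation only, here is a minimal sketch of such a design, assuming the standard analogue A-weighting pole frequencies from IEC 61672-1 and scipy's bilinear transform (the response accuracy degrades near Nyquist at low sample rates):
import numpy as np
from scipy.signal import bilinear

def a_weighting_coeffs(fs):
    # Analogue A-weighting pole frequencies (Hz) and 1 kHz gain normalization (dB)
    f1, f2, f3, f4 = 20.598997, 107.65265, 737.86223, 12194.217
    A1000 = 1.9997
    # H(s) = k * s^4 / ((s + w1)^2 (s + w2)(s + w3)(s + w4)^2), with w = 2*pi*f
    num = [(2 * np.pi * f4) ** 2 * 10 ** (A1000 / 20), 0, 0, 0, 0]
    den = np.polymul([1, 4 * np.pi * f4, (2 * np.pi * f4) ** 2],
                     [1, 4 * np.pi * f1, (2 * np.pi * f1) ** 2])
    den = np.polymul(np.polymul(den, [1, 2 * np.pi * f3]), [1, 2 * np.pi * f2])
    return bilinear(num, den, fs)  # discretize the analogue filter

b, a = a_weighting_coeffs(48000)
# Apply with scipy.signal.lfilter(b, a, x) to A-weight a time-domain signal x.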
Finally, I would like to mention that if you manage to create a ("smooth enough") polynomial approximation to the A-weighting filter's curve, you could multiply the signal's spectrum by it in the frequency domain (changing the magnitude only) and then perform an iFFT to go back to the time domain. This should provide an approximation to the A-weighted signal. Please note that this is NOT the correct way to do filtering, so treat it with caution (if you decide to try it at all) and only as a quick way to perform some checks.

Related

Head related impulse response for binaural audio

I am working with audio digital signal processing and binaural audio processing.
I am still learning the basics.
Right now, the idea is to do deconvolution and get an impulse response.
Please see the attached screenshot
Detailed description of what is happening:
Here, an exponential sweep signal is played back through a loudspeaker, and the playback is recorded with a microphone. The recorded signal is extended by zero padding (probably to double the original length), and the original exponential sweep signal is extended as well. FFTs are taken of both (the extended recording and the extended original), the two FFTs are divided, and we get the room transfer function. Finally, an inverse FFT is taken and some windowing is performed to get the impulse response.
My question:
I am having difficulty implementing this diagram in Python. How would you divide two FFTs? Is it possible? I can probably do all the steps like zero padding and FFTs, but I guess I am not going about it the correct way. I also do not understand the windowing and the "discard second half" step.
Can anyone show me how I would implement this in Python with a sweep signal? Just a small example with a few plots would help to get the idea. Please help.
Source of this image: http://www.four-audio.com/data/MF/aes-swp-english.pdf
Thanks in advance,
Sanket Jain
Yes, dividing two FFT spectra is possible and actually quite easy to implement in Python (but with some caveats).
Simply put: just as convolution of two time signals corresponds to multiplying their spectra, deconvolution can be realized by dividing the spectra.
Here is an example for a simple deconvolution with numpy:
(x is your excitation sweep signal and y is the recorded sweep signal, from which you want to obtain the impulse response.)
import numpy as np
from numpy.fft import rfft, irfft
# define length of FFT (zero padding): at least double length of input
input_length = np.size(x)
n = int(np.ceil(np.log2(input_length))) + 1
N_fft = 2 ** n
# transform
# real fft: N real input -> N/2+1 complex output (single sided spectrum)
# real ifft: N/2+1 complex input -> N real output
X_f = rfft(x, N_fft)
Y_f = rfft(y, N_fft)
# deconvolve
H = Y_f / X_f
# backward transform
h = irfft(H, N_fft)
# truncate to original length
h = h[:input_length]
This simple solution is a practical one but can (and should) be improved. A problem is that you will get a boost of the noise floor at those frequencies where X_f has a low amplitude. For example, if your exponential sine sweep starts at 100 Hz, the frequency bins below that frequency involve a division by (almost) zero. One simple solution is to first invert X_f, apply a band-limiting filter (highpass + lowpass) to remove the "boost areas", and then multiply it with Y_f:
# deconvolve
Xinv_f = 1 / X_f
Xinv_f = Xinv_f * bandlimit_filter
H = Y_f * Xinv_f
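bandlimit_filter is left undefined above; one minimal way to build it is a brick-wall mask over the rfft bins (the band edges and the sample rate fs are assumptions chosen to match the sweep):
# Hypothetical band edges for a sweep running from 100 Hz to 16 kHz
f_low, f_high = 100.0, 16000.0
freqs = np.fft.rfftfreq(N_fft, d=1/fs)  # centre frequency of each rfft bin
bandlimit_filter = ((freqs >= f_low) & (freqs <= f_high)).astype(float)
In practice you would taper the edges (e.g., with short raised-cosine ramps) rather than cutting hard, to avoid ringing.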
Regarding the distortion:
A nice property of the exponential sine sweep is that harmonic distortion produced during the measurement (e.g., by nonlinearities in the loudspeaker) shows up as smaller "side" responses before the "main" response after deconvolution (see this for more details). These side responses are the distortion products and can simply be removed with a time window. If the "main" response has no delay (it starts at t=0), the side responses appear at the end of the whole iFFT output, so you remove them by windowing out the second half.
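In code, applied to the full-length iFFT output (before the truncation to input_length above), and assuming the main response really starts at t=0, discarding the second half is just:
h_clean = h[:N_fft // 2]  # the distortion products wrapped into the second half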
I cannot guarantee that this is 100% correct from a signal-theory point of view, but I think it shows the point and it works ;)
This is a little over my head, but maybe the following bits of advice can help.
First, I came across a very helpful amount of sample code presented in Steve Smith's book The Scientist and Engineer's Guide to Digital Signal Processing. This includes a range of operations, from the basics of convolution to the FFT algorithm itself. The sample code is in BASIC, not Python, but the BASIC is perfectly readable and should be easy to translate.
I'm not entirely sure about the specific calculation you describe, but many operations in this realm (when dealing with multiple signals) turn out to simply employ addition or subtraction of constituent elements. To get an authoritative answer, I think you will have better luck at Stack Overflow's Signal Processing forum or at one of the forums at DSP Related.
If you do get an answer elsewhere, it might be good to either recap it here or delete this question entirely to reduce clutter.

Rising and Falling Edge in multiple signals - PYTHON

Here is the overall scenario: I'm recording some simple signals from a novel sensor using Python 3.8. I have already filtered the signals to get a better representation on which to run other data-analysis algorithms. Nothing special.
Here are some of the signals on which I need to run my algorithm:
First Example
Second Example
These signals come from a sensor I am working on. My aim is to get the timestamps where the signals start to increase or decrease. I actually need to run this algorithm on only one signal (blue or orange).
I have reported both signals because they have antagonistic behaviour, which might be useful for achieving the task.
In other words, these signals relate to foot flexion/extension (FLE/EXT): the point where they start to increase is the point where I start to move my foot, and vice versa, when I move my foot back, the signal amplitude decreases.
My job is to identify the FLE/EXT events. I tried examining the first derivative, but it does not appear to give me any useful information.
I have also tried a convolution with a fixed-length array of ones (a moving average), looking for where the next window's average is greater than the current one (sketched after the list below).
This approach has 2 constraints:
Fixed-length window: when the signal represents a faster FLE/EXT (i.e., less temporal distance on the x-axis), the window is not adequate to catch the variation.
A threshold criterion for deciding how much larger the next average must be than the current one for the iteration to count.
I am stuck here, because I want a dynamic-threshold approach, or something similar, that allows me to avoid any fixed thresholds.
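For reference, here is a minimal sketch of the fixed-window approach described above; the window length, threshold, and the name sig (one of your filtered signals) are placeholder assumptions:
import numpy as np

win = 50                                        # fixed window length (assumption)
avg = np.convolve(sig, np.ones(win) / win, mode='valid')   # moving average
thresh = 0.01                                   # fixed threshold (assumption)
rising = np.where(np.diff(avg) > thresh)[0]     # average starts increasing
falling = np.where(np.diff(avg) < -thresh)[0]   # average starts decreasing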
I hope to have a discussion with you about solving my problem. What do you think?
Please, if something is unclear, I am ready to clarify.
Best regards,
V

coding a deconvolution using python

Before I begin, I have to tell you that I have zero knowledge about DSP in Python.
I want to deconvolve two sound signals using Python so that I can extract the room impulse response; the input signal is a sine sweep and the output is a recording of it.
I wrote a piece of code but it didn't work; I've been trying for too long and really without results.
Can someone please help me with code that calculates the FFTs of the input and output, then computes h as the iFFT of their quotient, and plots it?
Deconvolution is a tough, ill-posed problem in the presence of noise and spatially-variant blurring. I assume your problem is not spatially variant, since you are using FFTs, so you can use the restoration module from the skimage Python package (instead of programming the algorithm at a low level with FFTs).
Here you can study a code example with one of the implemented methods in restoration module.
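As a rough sketch of what using that module looks like (a toy 2-D example, since the module is image-oriented; the kernel, sizes, and balance value are illustrative assumptions):
import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

rng = np.random.default_rng(0)
true = rng.standard_normal((64, 64))      # toy "unknown" input
psf = np.ones((5, 5)) / 25                # toy blur kernel
observed = fftconvolve(true, psf, mode='same') + 0.01 * rng.standard_normal((64, 64))
# Regularized (Wiener) deconvolution; balance trades data fit against smoothness
estimate = restoration.wiener(observed, psf, balance=0.1, clip=False)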
I recommend reading the book by O'Leary et al. if you want to learn more. All of its authors have more advanced books on this great topic.

How to get a time series based on a spectrogram in Python?

I have a time series and generate its spectrogram in Python with matplotlib.pyplot.specgram.
After I make some analysis and changes, I need to convert the spectrogram back into a time series.
Is there any function in matplotlib or another library that I can use directly? If not, could you elaborate on which direction I should work in?
Your warm help is appreciated.
Matplotlib is a library for plotting data. Generally if you're trying to do any computation you'd use a library suited for that.
numpy is a very popular library for doing numerical computation in Python. It just so happens they have a fairly extensive set of fft and ifft methods.
I would check them out here and see if they can solve your problem.
One thing commonly done (for example in the source-separation community) is to reuse the phase data of the original signal (from before the transformations were applied to it): the result is much better than zero or random phase, and not so far from algorithms that aim to reconstruct the phase information from scratch.
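A minimal sketch of that trick, assuming librosa for the STFT/iSTFT, y for the original time series, and S_mod for your modified magnitude spectrogram (all assumptions; the STFT parameters must match those used to produce the spectrogram):
import numpy as np
import librosa

D = librosa.stft(y, n_fft=2048, hop_length=512)   # STFT of the original signal
phase = np.angle(D)                               # keep the original phase
y_rec = librosa.istft(S_mod * np.exp(1j * phase), hop_length=512)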
A classic reconstruction algorithm is Griffin & Lim's, described in the paper "Signal estimation from modified short-time Fourier transform". It is an iterative algorithm, and each iteration requires a full STFT / inverse STFT, which makes it quite costly.
This problem is indeed an active area of research; a search for STFT + reconstruction + magnitude will yield plenty of papers aiming to improve on Griffin & Lim in terms of signal quality and/or computational efficiency.
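If you only have the magnitude, librosa ships an implementation of Griffin & Lim; assuming S is a magnitude spectrogram and the parameters below match how it was produced:
import librosa
y_rec = librosa.griffinlim(S, n_iter=32, hop_length=512, win_length=2048)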
You can find a detailed discussion in this thread on DSP Stack Exchange.

Binary Phase Shift Keying in Python

I'm currently working on some code to transmit messages/files/other data over lasers using audio transformation. My current code uses the hexlify function from the binascii module in Python to convert the data to binary, and then emits one tone for a 1 and a different tone for a 0. This works in theory, albeit not the fastest way to encode/decode, but testing shows a few errors.
The tones generated are not spot on, i.e., emitting 150 Hz can turn out as 145-155 Hz on the receiving end. This isn't a huge issue, as I can just widen the boundaries on the receiving end.
The real problem is that if I emit a tone and it is played, the computer on the receiving end may read it multiple times, or not read it at all, depending on the rate at which it samples the incoming audio. I have tried playing the tones at the same speed it samples, but that is very iffy.
In all, I have had a couple of successful runs with short messages, but this is very unreliable and inaccurate due to the issues mentioned above.
I have looked into this further, and a solution looks like it could involve BPSK, or Binary Phase Shift Keying, although I'm not sure how to implement it. Any suggestions or code samples would be appreciated!
My code for the project can be found here, but the main files I'm working on are for binary decoding and encoding, which are here and here. I'm not an expert in Python, so please pardon me if anything I've said is wrong, my code isn't the best, or if I've overlooked something basic.
Thanks! :-)
Take a look at GNU Radio!
http://gnuradio.org/redmine/projects/gnuradio/wiki
GNU Radio is a project to do, in software, as much as possible of radio signal transmission and reception. Because radio already uses phase shift keying, the GNU Radio folks have already solved the problem for you, and GNU Radio is already a Python project! The complicated DSP parts are written in C++ for speed, but wrapped for use from Python.
Here is a page discussing a project using Differential Binary Phase Shift Keying (DBPSK)/ Differential Quadrature Phase Shift Keying (DQPSK) to transmit binary data (in the example, a JPEG image). Python source code is available for download.
http://www.wu.ece.ufl.edu/projects/softwareRadio/
I see that your project is under the MIT license. GNU Radio is under GPL3, which may be a problem for you. You need to figure out if you can use GNU Radio without needing to make your project into a derived work, thus forcing you to change your license. It should be possible to make a standalone "sending daemon" and a standalone "receiving daemon", both of whose source code would be GPL3, and then have your MIT code connect to them over a socket or something.
By the way, one of my searches found this very clear explanation of how BPSK works:
http://cnx.org/content/m10280/latest/
Good luck!
In response to the first issue, regarding the frequency:
Looking at your decoder, I see that your sample rate is 44100 and your chunk size is 2048. If I am reading this right, that means your FFT size is 2048, which puts your FFT bin size at ~21.5 Hz. Have you tried zero-padding your FFT? Zero-padding won't change the measured frequency, but it will give you better resolution. I do see you are using quadratic interpolation to improve your frequency estimate. I haven't used that technique, so I'm not familiar with the improvement it brings. Maybe a balance of zero-padding and quadratic interpolation will get you better frequency accuracy.
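As a small sketch of the zero-padding idea (the 8x padding factor is an arbitrary assumption, and frame stands for one 2048-sample chunk):
import numpy as np

fs, chunk, pad = 44100, 2048, 8
spectrum = np.abs(np.fft.rfft(frame, n=pad * chunk))  # zero-padded FFT: ~2.7 Hz bin spacing
peak_hz = np.argmax(spectrum) * fs / (pad * chunk)    # coarse peak-frequency estimate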
Also, depending on the hardware doing the transmission and receiving, the frequency error might be the result of different clocks driving the A/D converters: one or both clocks may not run at exactly 44100 Hz. Something like that could affect the frequency you see in the FFT output.
