I have a time series and generate its spectrogram in Python with matplotlib.pyplot.specgram.
After some analysis and changes I need to convert the spectrogram back into a time series.
Is there any function in matplotlib or in another library that I can use directly? Or if not, could you please elaborate on which direction I should work in?
Any help is appreciated.
Matplotlib is a library for plotting data. Generally, if you're trying to do any computation you'd use a library suited for that.
NumPy is a very popular library for numerical computation in Python, and it happens to have a fairly extensive set of fft and ifft functions.
I would check them out here and see if they can solve your problem.
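For example, a plain FFT/iFFT round trip with NumPy is lossless as long as you keep the complex spectrum. A minimal sketch (the catch being that specgram only gives you the power spectrum, so the phase has to come from somewhere else):

import numpy as np

x = np.random.randn(1024)            # stand-in for your time series
X = np.fft.rfft(x)                   # complex spectrum: magnitude *and* phase
x_back = np.fft.irfft(X, n=len(x))   # exact inverse while the phase is kept

print(np.allclose(x, x_back))        # True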
One thing commonly done (for example in the source separation community) is to reuse the phase data of the original signal (from before any transformations were applied to it). The result is much better than using null or random phase, and not far from what algorithms that reconstruct the phase information from scratch can achieve.
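As a minimal sketch of that idea, assuming you compute the STFT yourself with scipy.signal (matplotlib's specgram does not return the phase), you can recombine your modified magnitudes with the original phase and invert:

import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)           # stand-in for your time series

f, frames, Zxx = stft(x, fs=fs, nperseg=256)
magnitude = np.abs(Zxx)                   # the spectrogram you analyse / modify
phase = np.angle(Zxx)                     # original phase, kept aside

modified = magnitude                      # ... your changes to the magnitude go here ...

# Recombine the modified magnitude with the original phase and invert.
_, x_rec = istft(modified * np.exp(1j * phase), fs=fs, nperseg=256)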
A classic reconstruction algorithm is Griffin & Lim's, described in the paper "Signal estimation from modified short-time Fourier transform". It is an iterative algorithm: each iteration requires a full STFT / inverse STFT, which makes it quite costly.
This problem is indeed an active area of research; a search for STFT + reconstruction + magnitude will yield plenty of papers that aim to improve on Griffin & Lim in terms of signal quality and/or computational efficiency.
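A bare-bones sketch of that iteration, assuming a magnitude spectrogram mag produced with matching STFT parameters (here nperseg=256, so mag must have nperseg//2 + 1 frequency rows):

import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, fs, nperseg=256, n_iter=50):
    """Estimate a signal whose STFT magnitude is close to `mag` (Griffin & Lim)."""
    rng = np.random.default_rng(0)
    angles = np.exp(2j * np.pi * rng.random(mag.shape))   # start from random phase
    for _ in range(n_iter):
        # Invert the current estimate, re-analyse it ...
        _, x = istft(mag * angles, fs=fs, nperseg=nperseg)
        _, _, Zxx = stft(x, fs=fs, nperseg=nperseg)
        # ... and keep only its phase, forcing the target magnitude.
        angles = np.exp(1j * np.angle(Zxx))
    _, x = istft(mag * angles, fs=fs, nperseg=nperseg)
    return x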
You can find a detailed discussion in this thread on DSP Stack Exchange.
I've been trying to solve this for a couple of weeks now, but it seems like I'm not able to wrap my head around it. The task is pretty simple: I'm getting a signal in volts from a microphone, and in the end I want to know how loud it is out there in dB(A).
There are so many problems I don't even know where to start. Let's begin with my idea:
1. Convert the voltage signal into a signal in pascals [Pa].
2. Run an FFT on that signal so I know which frequencies I'm dealing with.
3. Then somehow apply the A-weighting, but since my values are in [Pa] I can't just multiply or add the A-weighting.
4. Apply an iFFT to get back to my time signal.
5. Go from Pa to dB.
6. Calculate the RMS and I'm done. (Hopefully)
The main problem is the A-weighting. I really don't get how I can apply it to a live signal, and since the FFT produces complex values I'm also a little confused by that.
Maybe you get the idea/problem/workflow and can help me get at least a little bit closer to the goal.
A little disclaimer: I am 100% new to the world of acoustics, so please explain it like you would to a little child :D I'm programming in Python.
Thanks in advance for your time!
To give you a short answer: this task can be done in only a few steps, utilizing the waveform_analysis package and Parseval's theorem.
The simplest implementation I can come up with is:
1. A-weight the signal in the time domain, using this library:
import waveform_analysis
weighted_signal = waveform_analysis.A_weight(signal, fs)
2. Take the RMS of the signal (using the fact that the power in the time domain equals the power in the frequency domain - Parseval's theorem):
import numpy as np
rms_value = np.sqrt(np.mean(np.abs(weighted_signal)**2))
3. Convert this amplitude to dB:
result = 20 * np.log10(rms_value)
If you run these three snippets together, this gives you the result in dB(A)FS.
To get the dB(A)Pa value, you need to know what 0 dBPa corresponds to in dBFS. This is usually done with a calibrated source such as https://www.grasacoustics.com/products/calibration-equipment/product/756-42ag
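For illustration only (the variable names and the -20 dBFS figure below are made up), the bookkeeping then looks like this, assuming you record the calibrator tone through the same pipeline:

# `calibrator_dbfs` is the level in dBFS measured with a 94 dB SPL (1 Pa)
# calibrator mounted on the microphone; the numbers here are placeholders.
calibrator_dbfs = -20.0
offset = 94.0 - calibrator_dbfs      # 1 Pa RMS corresponds to 94 dB SPL
result_spl = result + offset         # `result` from the snippet above, now in dB(A) re 20 uPa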
One flaw of this implementation is that it does not pre-window the time signal. This is, on the other hand, not an issue for sufficiently long signals.
Hello Christian Fichtelberg and welcome to Stack Overflow. I believe your question could be answered more easily on DSP Stack Exchange, but I will try to provide a quick and dirty answer.
To avoid taking the signal to the frequency domain and doing the multiplication there, you can implement the A-weighting filter in the time domain and perform some kind of convolution there. (I draw your attention to the fact that convolution in the time domain - where your signal "resides" - is equivalent to multiplication in the frequency domain. If you are unfamiliar with this, please have a quick look at Wikipedia's convolution page.)
I won't go into the details of the possible pros and cons of each method (time-domain convolution vs frequency-domain multiplication). You could search DSP SE or look into a textbook on DSP (such as Oppenheim's Digital Signal Processing, or the equivalent book by Proakis and Manolakis).
According to IEC 61672-1:2013, the digital filter should be "translated" from the analogue filter (a good way to do so is the bilinear transform). The proposed filter is a fairly simple IIR (infinite impulse response) filter.
I will skip the implementation here as it has been provided by others. Please find a MATLAB implementation, a Python implementation (most probably what you are seeking for your application), a quite "high-level" answer on DSP SE with some links, and information on designing filters for arbitrary sample rates, also on DSP SE.
Finally, I would like to mention that if you manage to create a ("smooth enough") polynomial approximation to the A-weighting curve, you could possibly multiply the spectrum by it in the frequency domain, changing only the magnitude, and then perform an iFFT to go back to the time domain. This should most probably give an approximation to the A-weighted signal. Please note that this is NOT the correct way to do filtering, so treat it with caution (if you decide to try it at all) and only as a quick way to perform some checks.
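Along those lines, instead of a polynomial fit you could evaluate the standard closed-form A-weighting magnitude directly on the FFT bins; a quick-and-dirty sketch (with the same caveat that this is not proper filtering, and with an assumed sample rate and a random stand-in signal):

import numpy as np

def a_weighting_gain(f):
    """Linear A-weighting magnitude (standard closed-form curve, 0 dB at 1 kHz)."""
    f = np.asarray(f, dtype=float)
    num = (12194.0 ** 2) * f ** 4
    den = ((f ** 2 + 20.6 ** 2)
           * np.sqrt((f ** 2 + 107.7 ** 2) * (f ** 2 + 737.9 ** 2))
           * (f ** 2 + 12194.0 ** 2))
    return (num / den) * 10 ** (2.0 / 20.0)   # +2.0 dB normalisation at 1 kHz

fs = 48000                                     # assumed sample rate
x = np.random.randn(fs)                        # stand-in for the signal in Pa
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
x_weighted = np.fft.irfft(X * a_weighting_gain(freqs), n=len(x))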
Is there a Python module that provides simple chromatogram/trace analysis algorithms? I am looking for baseline correction, peak detection and peak integration functionality for simple time courses (with data stored in NumPy arrays).
I have spent quite some time searching now and there doesn't seem to be any, which really surprises me.
I'm not sure what analysis you are conducting, but have you looked at PyLSS?
It can (and I quote from the documentation):
PyLSS is able to compute:
LSS parameters (log kw and S)
Build and plot chromatograms from experimental/predicted retention times
Regarding peak detection and peak integration, I have used the functionality of the ptp() method in the NumPy module for this, and I find that it is pretty powerful. Would this satisfy your requirement?
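If you end up rolling your own with plain NumPy/SciPy instead, a rough sketch of baseline correction, peak detection and integration (using scipy.signal.find_peaks rather than ptp(); the synthetic trace and all thresholds below are arbitrary) could look like this:

import numpy as np
from scipy.signal import find_peaks

# Synthetic trace: two Gaussian peaks on a gently sloping baseline.
t = np.linspace(0.0, 10.0, 2000)
trace = 0.05 * t + np.exp(-(t - 3.0) ** 2 / 0.02) + 0.6 * np.exp(-(t - 7.0) ** 2 / 0.05)

# Crude baseline: a straight line fitted to the whole trace (good enough for a sketch).
baseline = np.polyval(np.polyfit(t, trace, 1), t)
corrected = trace - baseline

# Peak detection, then a rectangle-rule integration between each peak's bases.
peaks, props = find_peaks(corrected, prominence=0.1)
dt = t[1] - t[0]
for peak, lo, hi in zip(peaks, props["left_bases"], props["right_bases"]):
    area = corrected[lo:hi].sum() * dt
    print(f"peak at t = {t[peak]:.2f}, area = {area:.3f}")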
I am trying to implement something similar to what is detailed in this comment (https://github.com/scipy/scipy/issues/4940#issuecomment-109952602) (reposted here):
Anyway, a spike-train coded as a sum of delta functions is an analog signal. So when you bin the signal into rate or count histograms you will introduce aliasing, and your binning noise is the aliasing. The theory is explained in the classical French and Holden (1971) paper, though it is uncommon to use their "exact" anti-alias filter for reasons of efficiency. The easiest solution is to convolve each delta-function spike with a short FIR low-pass filter before you sample the spike train. http://link.springer.com/article/10.1007/BF00291117
I haven't figured out how to accomplish this. I looked into various NumPy and SciPy functions but they all assume you know what you're doing :)
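For what it's worth, here is one way to read the quoted suggestion: place a short low-pass kernel (a Gaussian here, though any FIR low-pass would do) at every spike time, and only then evaluate the summed signal at the sampling instants. All parameter values below are illustrative, not from the original comment.

import numpy as np

spike_times = np.sort(np.random.uniform(0.0, 10.0, size=200))   # spike train, in seconds
fs = 100.0                                                       # desired sampling rate, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)                               # sampling instants

sigma = 0.01   # kernel width in seconds; sets the effective low-pass cutoff
rate = np.zeros_like(t)
for s in spike_times:
    rate += np.exp(-0.5 * ((t - s) / sigma) ** 2)                # one kernel per spike
rate /= sigma * np.sqrt(2.0 * np.pi)                             # each kernel integrates to one spike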
I'm working on an EEG signal processing method for recognition of the P300 ERP.
At the moment, I'm training my classifier with a single vector of data that I get by averaging across preprocessed data from a chosen subset of the original 64 channels. I'm using the values from the EEG directly, not frequency features from an FFT. The method actually achieves quite solid performance of around 75% accurate classification.
I would like to improve it by using ICA to clean up the EEG data a bit. I have read through a lot of tutorials and papers and I am still kind of confused.
I'm implementing my method in Python, so I chose to use sklearn's FastICA.
from sklearn.decomposition import FastICA
self.ica = FastICA(n_components=64,max_iter=300)
icaSignal = self.ica.fit_transform(self.signal)
From a 25256 samples x 64 channels matrix I get a matrix of estimated sources, which is also 25256x64. The problem is that I'm not quite sure how to use the output.
Averaging those components and training a classifier the same way as with the raw signal reduces performance to less than 30%, so this is probably not the way to go.
Another approach I read about is rejecting some of the components at this point - the ones that represent eye blinks, muscle activity etc. - based on their frequency content and some other heuristics. I'm also not quite confident about how to do that exactly.
After I reject some of the components, what is the next step? Should I average the ones that are left and feed the classifier with them, or should I try to reconstruct the EEG signal without them - and if so, how do I do that in Python? I wasn't able to find any information about that reconstruction step. It is probably much easier to do in MATLAB, so nobody bothered to write about it :(
Any suggestions? :)
Thank you very much!
I haven't used Python for ICA, but in terms of the steps, it shouldn't matter whether it's MATLAB or Python.
You are completely right that it's hard to reject ICA components. There is no widely accepted objective measure. There are certain patterns for eye blinks (high voltage in frontal channels) and for muscle artifacts (wide spectral coverage, because it's EMG, at peripheral channels). If you don't know where to get started, I recommend reading the help of a MATLAB plugin called EEGLAB. This UCSD group has some nice materials to help you start.
https://eeglab.org/
To answer your question on the ICA reconstruction: after rejecting some ICA components, you should reconstruct the original EEG without them.
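A minimal sketch of that reconstruction with sklearn's FastICA (the data array and the rejected component indices below are made up; you would pick the indices after inspecting the components yourself):

import numpy as np
from sklearn.decomposition import FastICA

eeg = np.random.randn(25256, 64)             # stand-in for your samples x channels matrix

ica = FastICA(n_components=64, max_iter=300, random_state=0)
sources = ica.fit_transform(eeg)             # 25256 x 64 matrix of estimated components

bad = [0, 7]                                 # components judged to be blinks / EMG (illustrative)
sources[:, bad] = 0.0                        # zero out the rejected components

eeg_clean = ica.inverse_transform(sources)   # back to channel space, without the artifacts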
I want to simulate a propagating wave, with absorption and reflection off some bodies, in three-dimensional space, and I want to do it in Python. Should I use NumPy? Are there special libraries I should use?
How can I simulate the wave? Can I use the wave equation? But what do I do when there is a reflection?
Is there a better method? Should I do it with vectors? But when the rays diverge the intensity gets lower. Difficult.
Thanks in advance.
If you do any computationally intensive numerical simulation in Python, you should definitely use NumPy.
The most general algorithm to simulate an electromagnetic wave in arbitrarily-shaped materials is the finite-difference time domain method (FDTD). It solves the wave equation, one time-step at a time, on a 3-D lattice. It is quite complicated to program yourself, though, and you are probably better off using a dedicated package such as Meep.
There are books on how to write your own FDTD simulations: here's one, here's a document with some code for 1-D FDTD and explanations of more than one dimension, and googling "writing FDTD" will find you more of the same.
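To give a feel for the method, here is a minimal 1-D finite-difference sketch in the same spirit (scalar wave equation, leapfrog time stepping, fixed ends so the pulse is totally reflected); a real 3-D simulation with absorbing materials is what the dedicated packages handle for you:

import numpy as np

nx, nt = 400, 800
c, dx = 1.0, 1.0
dt = 0.5 * dx / c                     # respects the CFL stability limit

u_prev = np.zeros(nx)                 # field at time step n-1
u = np.zeros(nx)                      # field at time step n
u[nx // 2] = 1.0                      # initial pulse in the middle

history = []
for _ in range(nt):
    u_next = np.zeros(nx)
    # Discretised wave equation u_tt = c^2 * u_xx, stepped forward in time.
    u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                    + (c * dt / dx) ** 2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
    # u_next[0] and u_next[-1] stay 0: fixed ends, i.e. total reflection.
    u_prev, u = u, u_next
    history.append(u.copy())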
You could also approach the problem by assuming all your waves are plane waves; then you could use vectors and the Fresnel equations. Or, if you want to model Gaussian beams being transmitted and reflected from flat or curved surfaces, you could use the ABCD matrix formalism (also known as ray transfer matrices), which takes the divergence of beams into account.
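As a small illustration of the ABCD route, here is a sketch that propagates a Gaussian beam's complex q parameter through half a metre of free space and a thin lens (all numeric values are arbitrary):

import numpy as np

wavelength = 633e-9                     # HeNe-like wavelength, arbitrary choice
w0 = 1e-3                               # 1 mm beam waist
q = 1j * np.pi * w0 ** 2 / wavelength   # complex beam parameter at the waist

def propagate(q, abcd):
    """Apply an ABCD ray-transfer matrix to the complex beam parameter q."""
    (a, b), (c, d) = abcd
    return (a * q + b) / (c * q + d)

free_space = np.array([[1.0, 0.5], [0.0, 1.0]])         # 0.5 m of free space
thin_lens = np.array([[1.0, 0.0], [-1.0 / 0.25, 1.0]])  # thin lens, f = 0.25 m

q_out = propagate(q, thin_lens @ free_space)             # free space first, then the lens
w_out = np.sqrt(-wavelength / (np.pi * (1.0 / q_out).imag))  # beam radius after the system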
If you are solving 3D custom PDEs, I would recommend at least a look at FiPy. It'll save you the trouble of building a lot of your matrix preconditioners and solvers from scratch. It uses NumPy and/or Trilinos. Here are some examples.
I recommend you use my project GarlicSim as the framework in which you build the simulation. You will still need to write your algorithm yourself, probably in Numpy, but GarlicSim may save you a bunch of boilerplate and allow you to explore your simulation results in a flexible way, similar to version control systems.
Don't use Python. I've tried using it for computationally expensive things and it just wasn't made for that.
If you need to simulate a wave in a Python program, write the necessary code in C/C++ and expose it to Python.
Here's a link to the C API: http://docs.python.org/c-api/
Be warned, it isn't the easiest API in the world :)