Better method to perform numerical integration on acceleration - python

I have a set of acceleration data points read from the sensor.
I also have the time at which the reading was taken.
How do I numerically integrate to find the instantaneous velocity?
I have tried the following, which does give me a result, but I am wondering whether there is a better, more accurate method.
v_1=v_0+a*dt
Where dt is calculated from the difference between the times at which the data was measured.
And by iterating the above I could find the instantaneous velocity.

If you only have a number of discrete data points, it is reasonable to assume that the acceleration changes linearly between the data points, i.e.,
a(t) = a_i + (a_{i+1} - a_i) * (t - t_i) / (t_{i+1} - t_i) for t_i <= t <= t_{i+1}
When integrating this piecewise-linear function, the midpoint rule is exact. (For smooth functions the midpoint rule typically has about half the error of the trapezoidal rule, by the way.)
You can get fancier by assuming that the acceleration is continuously differentiable, in which case you would construct a quadratic polynomial through each set of three consecutive points and integrate that, resulting in Simpson's rule.
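A minimal sketch of that cumulative (trapezoidal) integration in NumPy, using made-up readings t and a and a known initial velocity v0:

import numpy as np

# Made-up sensor readings: timestamps (s) and accelerations (m/s^2).
t = np.array([0.00, 0.11, 0.19, 0.31, 0.40])
a = np.array([0.0, 0.5, 0.9, 1.1, 1.0])
v0 = 0.0                                   # known initial velocity

# Piecewise-linear (trapezoidal) cumulative integration: velocity at each timestamp.
dt = np.diff(t)
v = v0 + np.concatenate(([0.0], np.cumsum((a[:-1] + a[1:]) / 2 * dt)))
print(v)

SciPy ships the same cumulative sum as scipy.integrate.cumulative_trapezoid(a, t, initial=0) (called cumtrapz in older releases).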


How to integrate a discrete function

I need to integrate a certain function that I have specified as discrete values for discrete arguments (I want to compute the area under the graph I get).
That is, from the earlier part of the code I have the lists:
args = [a1, a2, a3, a4]
values = [v1, v2, v3, v4]
where value v1 corresponds to a1, etc. If it's important, I have args set in advance with a specific discretization width, and I compute the values with a ready-made function.
I am attaching a figure.
And putting this function, which gave me a 'values' array, into integrate.quad() gives me an error:
IntegrationWarning: The maximum number of subdivisions (50) has been achieved. If increasing the limit yields no improvement it is advised to analyze
the integrand in order to determine the difficulties. If the position of a
local difficulty can be determined (singularity, discontinuity) one will
probably gain from splitting up the interval and calling the integrator on the subranges. Perhaps a special-purpose integrator should be used.
How can I integrate this? I'm mulling over the scipy documentation, but I can't seem to put it together. After all, the args themselves are already discretized into a finite set of points.
I am guessing that before passing it to quad you did some kind of interpolation on the data. In general this is a misguided approach.
Integration and interpolation are very closely related. An integral requires you to compute the area under the curve, and thus you must know the value of the function at any given point. Hence, starting from a set of data it is natural to want to interpolate it first. Yet the quad routine does not know that you started with a limited set of data; it just assumes that the function you gave it is "perfect" and it will do its best to compute the area under it! However, the interpolated function is just a guess at what the values are between the given points, and thus integrating an interpolated function is a waste of time.
As MB-F said, in the discrete case you should simply sum up the points while multiplying them by the step size between them. You can do this the naïve way by pretending that the function is just rectangles. Or you can do what MB-F suggested, which pretends that all the data points are connected with straight lines. Going one step further is pretending that the line connecting the data points is smooth (often true for physical systems) and using Simpson integration as implemented by SciPy.
Since you only have a discrete approximation of the function, integration reduces to summation.
As a simple approximation of an integral, try this:
import numpy as np

midpoints = (values[:-1] + values[1:]) / 2   # average of adjacent function values
steps = np.diff(args)                        # width of each interval
area = np.sum(midpoints * steps)
(Assuming args and values are numpy arrays and the function is value = f(arg).)
That approach sums the areas of all trapezoids between adjacent data points (the trapezoidal rule; see Wikipedia).
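For reference, NumPy provides this same trapezoidal sum as np.trapz; a quick check with made-up arrays:

import numpy as np

args = np.array([0.0, 0.5, 1.0, 1.5])     # made-up sample positions
values = np.array([1.0, 1.8, 2.1, 1.9])   # made-up function values

midpoints = (values[:-1] + values[1:]) / 2
area = np.sum(midpoints * np.diff(args))
print(area, np.trapz(values, args))       # both give the trapezoidal-rule result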

How to decide between scipy.integrate.simps or numpy.trapz?

I have a set of points which, when plotted, give the graph below. I would like to find the area under the graph, but I am not sure whether scipy.integrate.simps or numpy.trapz is more suitable.
Could someone advise me on the mathematical background behind the two functions, and thus on which one is more accurate?
The trapezoidal rule is the simplest of numerical integration methods. In effect, it estimates the area under a curve by approximating the curve with straight line segments, which only requires two points for each segment. Simpson's rule uses quadratic curves to approximate the function segments instead, each of which requires three points, sampled from your function, to approximate a given segment.
So what is the error associated with using these numerical methods as approximations to an analytical integral?
The error associated with the trapezoidal rule, to leading order, is proportional to h^2[f'(a) - f'(b)]. h is the spacing between sampled points in your function; f'(a) and f'(b) are the first derivative of your function at the beginning and end of the sampling domain.
The error from Simpson's rule, on the other hand, is proportional to h^4[f'''(a) - f'''(b)], where f''' is the third derivative of your function.
h is typically small, so h^4 is typically much smaller than h^2!
TLDR: Simpson's rule typically gives far superior results for numerical integration, compared to the trapezoidal rule, with basically no additional computational cost.
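One quick way to see the difference is to apply both rules to samples of a function whose integral is known. The sketch below (not the poster's data) compares np.trapz with scipy.integrate.simpson (named simps in older SciPy releases) on a sine, whose integral over [0, pi] is exactly 2:

import numpy as np
from scipy.integrate import simpson   # `simps` in older SciPy versions

x = np.linspace(0.0, np.pi, 21)       # 21 evenly spaced sample points
y = np.sin(x)                         # exact integral over [0, pi] is 2

print(np.trapz(y, x))                 # about 1.9959: error shrinks like h^2
print(simpson(y, x=x))                # about 2.0000: error shrinks like h^4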

Intelligent fitting with python

Is there a more intelligent function than scipy.optimize.curve_fit in Python?
I also need to define a function to fit data with.
I've spent ages trying to fit data with it. I can only fit basic functions, and fitting two lines with a piecewise function seems impossible when the y-axis has low values like 0.01-0.05 and the x-axis has values like 20-60.
I know I have to plug in initial values, but still it takes too much time and sometimes it does not work.
EDIT
I added a graph showing the data I fitted, where you can see the effect of changing the bounds in scipy.optimize.curve_fit.
The function I fit with is this one:
def abslines(x, a, b, c, d):
    return np.piecewise(x, [x < -b/a, x >= -b/a], [lambda x: a*x+b+d, lambda x: c*(x+b/a)+d])
The initial conditions are the same every time, and I think they are close enough:
p0=[-0.001,0.2,0.005,0.]
because the best-fit parameter values are:
[-0.00411946 0.19895546 0.00817832 0.00758401]
Bounds are:
No bounds;
bounds=([-1.,0.,0.,0.],[0.,1.,1.,1.])
bounds=([-0.5,0.01,0.0001,0.],[-0.001,0.5,0.5,1.])
bounds=([-0.1,0.01,0.0001,0.],[-0.001,0.5,0.1,1.])
bounds=([-0.01,0.1,0.0001,0.],[-0.001,0.5,0.1,1.])
(starting with no bounds, ending with the best bounds)
Still, I think this takes too much time and that curve_fit should be able to do better. This way I almost have to specify the solution myself; it feels like I am doing the fitting by hand-tuning parameters rather than curve_fit doing it.
Without knowing exactly which regression algorithm Python is using, it is quite impossible to give a definitive answer. The calculation is probably iterative and requires initial guesses, which are probably derived from the specified bounds. So the bounds have an indirect effect on the convergence and the results.
I suggest trying a simpler algorithm (not iterative, no initial guess) from this paper: https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf
The code is easy to write in any computer language. I suppose this can be done with Python as well.
The piecewise function to be fitted consists of two line segments joined at x = a1:
y = p1*x + q1 for x <= a1, and y = p2*x + q2 for x > a1.
The parameters to be computed are a1, p1, q1, p2 and q2.
The result is shown on the next figure, with the approximate values of the parameters.
Thus no bounds need to be specified, and as a consequence there are no problems related to bounds.
NOTE: The method is based on fitting a convenient integral equation, as shown in the paper referenced above. The numerical calculation of the integral is subject to deviations if the number of points is too small. In the present case there are a large number of points, so even with the scatter this is a favourable case for the practical application of this method.
1. Algorithms behind curve_fit expect differentiable functions, so things can go south if you give it a non-differentiable one.
2. For a more powerful interface to curve fitting, have a look at lmfit.
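For concreteness, here is a small, self-contained sketch of fitting the question's piecewise model with curve_fit on synthetic data (generated with made-up parameters close to the reported best fit); the only change from the question's setup is an initial guess whose breakpoint -b/a lies inside the x range:

import numpy as np
from scipy.optimize import curve_fit

def abslines(x, a, b, c, d):
    # Two line segments that join at the breakpoint x = -b/a (same model as in the question).
    return np.piecewise(x, [x < -b/a, x >= -b/a],
                        [lambda x: a*x + b + d, lambda x: c*(x + b/a) + d])

# Synthetic data with parameters close to the best fit reported in the question.
rng = np.random.default_rng(0)
x = np.linspace(20, 60, 200)
y = abslines(x, -0.004, 0.2, 0.008, 0.0075) + rng.normal(0, 0.001, x.size)

# Initial guess chosen so the breakpoint -b/a (= 40 here) falls inside the data range.
p0 = [-0.005, 0.2, 0.005, 0.0]
popt, pcov = curve_fit(abslines, x, y, p0=p0)
print(popt)

Note that with the question's p0 the breakpoint -b/a sits at 200, outside the 20-60 data range, so the second segment is never active at the starting point and its slope c gets no gradient information; that alone may explain some of the stalled fits.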

How can I discriminate the falling edge of a signal with Python?

I am working on discriminating some signals for the calculation of the free-period oscillation and damping ratio of a spring-mass system (seismometer). I am using Python as the main processing program. What I need to do is import this signal, parse the signal and find the falling edge, then return a list of the peaks from the tops of each oscillation as a list so that I can calculate the damping ratio. The calculation of the free-period is fairly straightforward once I've determined the location of the oscillation within the dataset.
Where I am mostly hung up is how to parse through the list, identify the falling edge, and then capture each of the Z0..Zn elements. The oscillation frequency can be calculated using an FFT fairly easily once I know where that falling edge is, but if I process the whole file, a lot of energy from the energizing of the system before the release can sometimes force the algorithm to kick out an ultra-low frequency that represents the near-DC offset rather than the actual oscillation. (It's especially a problem at higher damping ratios, where there might be only four or five measurable rebounds.)
Has anyone got some ideas on how I can go about this? Right now, the code below uses arbitrarily assigned values for the signal in the screenshot; however, I need the code to calculate those values. Also, I haven't yet determined how to create my list of peaks for the calculation of the damping ratio h. Your help in getting some ideas together for solving this would be very welcome. Because I have such a small Stack Overflow reputation, I have included my signal in a sample screenshot at the following link:
(Boy, I hope this works!)
Tycho's sample signal -->
https://github.com/tychoaussie/Sigcal_v1/blob/066faca7c3691af3f894310ffcf3bbb72d730601/Freeperiod_Damping%20Ratio.jpg
##########################################################
import os, sys, csv
from scipy import signal
from scipy.integrate import simps
import pylab as plt
import numpy as np
import scipy as sp
#
# code goes here that imports the data from the csv file into lists.
# One of those lists is called laser, a time-history list in counts from an analog-digital-converter
# sample rate is 130.28 samples / second.
#
#
# Find the period of the observed signal
#
delta = 0.00767 # Calculated elsewhere in code, represents 130.28 samples/sec
# laser is a list with about 20,000 elements representing time-history data from a laser position sensor
# The release of the mass and system response occurs starting at sample number 2400 in this particular instance.
sense = signal.detrend(laser[2400:(2400+8192)]) # Remove the linear trend (and mean) from the signal
N = len(sense)
W = np.fft.fft(sense)
freq = np.fft.fftfreq(len(sense),delta) # len(sense) is the number of samples; delta is the sample interval (seconds per sample)
#
# Take the sample with the largest amplitude as our center frequency.
# This only works if the signal is heavily sinusoidal and stationary
# in nature, like our calibration data.
#
idx = np.where(abs(W)==max(np.abs(W)))[0][-1]
Frequency = abs(freq[idx]) # Frequency in Hz
period = 1/(Frequency*delta) # represents the number of samples for one cycle of the test signal.
#
# create an axis representing time.
#
dt = [] # Create an x axis that represents elapsed time in seconds. delta = seconds per sample, i represents sample count
for i in range(0,len(sensor)):
    dt.append(i*delta)
#
# At this point, we know the frequency interval, the delta, and we have the arrays
# for signal and laser. We can now discriminate out the peaks of each rebound and use them to process the damping ratio of either the 'undamped' system or the damping ratio of the 'electrically damped' system.
#
print('Frequency calculated to', Frequency, 'Hz.')
Here's a somewhat unconventional idea, which I think might be fairly robust and doesn't require a lot of heuristics and guesswork. The data you have is really high quality and fits a known curve, so that helps a lot here. Here I assume the "good part" of your curve has the form:
V = a * exp(-γ * t) * cos(2 * π * f * t + φ) + V0 # [Eq1]
V: voltage
t: time
γ: damping constant
f: frequency
a: starting amplitude
φ: starting phase
V0: DC offset
Outline of the algorithm
Getting rid of the offset
Firstly, calculate the derivative numerically. Since the data quality is quite high, the noise shouldn't affect things too much.
The voltage derivative V_deriv has the same form as the original data: same frequency and damping constant, but with a different phase ψ and amplitude b,
V_deriv = b * exp(-γ * t) * cos(2 * π * f * t + ψ) # [Eq2]
The nice thing is that this will automatically get rid of your DC offset.
Alternative: This step isn't strictly needed – the offset is a relatively minor complication to the curve fitting, since you can always provide a good guess for the offset by averaging the entire curve. The trade-off is that you get a better signal-to-noise ratio if you don't use the derivative.
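As a sketch of this step, a synthetic signal of the form [Eq1] is generated (with made-up parameters) and differentiated numerically; in practice v would be your recorded data:

import numpy as np

# Synthesize a damped cosine with a DC offset, standing in for the recorded signal.
delta = 0.00767                           # sample interval (s), as in the question
t = np.arange(2048) * delta
v = 1.0 * np.exp(-0.3 * t) * np.cos(2 * np.pi * 1.2 * t + 0.5) + 2.0

v_deriv = np.gradient(v, delta)           # central-difference derivative; the DC offset V0 drops out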
Preliminary guesswork
Now consider the curve of the derivative (or the original curve if you skipped the last step). Start with the data point on the very far right, and then follow the curve leftward as many oscillations as you can until you reach an oscillation whose amplitude is greater than some amplitude threshold A. You want to find a section of the curve containing one oscillation with a good signal-to-noise ratio.
How you determine the amplitude threshold is difficult to say. It depends on how good your sensors are. I suggest leaving this as a parameter for tweaking later.
Curve-fitting
Now that you have captured one oscillation, you can do lots of easy things: estimate the frequency f and damping constant γ. Using these initial estimates, you can perform a nonlinear curve fit of all the data to the right of this oscillation (including the oscillation itself).
The function that you fit is [Eq2] if you're using the derivative, or [Eq1] if you use the original curve (but they have the same form anyway, so it's rather a moot point).
Why do you need these initial estimates? For a nonlinear curve fit to a decaying wave, it's critical that you give a good initial guess for the parameters, especially the frequency and damping constant. The other parameters are comparatively less important (at least that's my experience), but here's how you can get the others just in case:
The phase is 0 if you start from a maximum, or π if you start at a minimum.
You can guess the starting amplitude too, though the fitting algorithm will typically do fine even if you just set it to 1.
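A minimal sketch of that nonlinear fit on synthetic data (the initial guesses below stand in for whatever estimates you pull from the captured oscillation):

import numpy as np
from scipy.optimize import curve_fit

def decaying_cosine(t, a, gamma, f, phi, v0):
    # [Eq1]: exponentially damped cosine plus a DC offset.
    return a * np.exp(-gamma * t) * np.cos(2 * np.pi * f * t + phi) + v0

# Synthetic stand-in for the "good part" of the recording (made-up parameters).
rng = np.random.default_rng(1)
t = np.arange(2048) * 0.00767
v = decaying_cosine(t, 1.0, 0.3, 1.2, 0.5, 2.0) + rng.normal(0, 0.01, t.size)

# Initial estimates: frequency and damping from one clean oscillation, offset from the mean.
p0 = [0.8, 0.25, 1.18, 0.0, np.mean(v)]
popt, pcov = curve_fit(decaying_cosine, t, v, p0=p0)
perr = np.sqrt(np.diag(pcov))   # rough 1-sigma uncertainties for each fitted parameter
print(popt, perr)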
Finding the edge
The curve fit should be really good at this point – you can tell because the discrepancy between your curve and the fit is very low. Now the trick is to attempt to increase the domain of the curve to the left.
If you stay within the sinusoidal region, the curve fit should remain quite good (how you judge the "goodness" requires some experimenting). As soon as you hit the flat region of the curve, however, the errors will start to increase dramatically and the parameters will start to deviate. This can be used to determine where the "good" data ends.
You don't have to do this point by point though – that's quite inefficient. A binary search should work quite well here (possibly the "one-sided" variation of it).
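One way that binary search might look, reusing decaying_cosine, t, v and p0 from the sketch above and prepending a flat "pre-release" segment so there is actually an edge to find (the residual threshold is an assumption you would tune):

# Flat region followed by the damped cosine, mimicking the energize/release sequence.
t_full = np.arange(1024 + len(v)) * 0.00767
v_full = np.concatenate([np.full(1024, 2.0), v])

def fits_well(start, threshold=0.05):
    # Fit the tail v_full[start:] and check whether the residuals stay small.
    tail_t = t_full[start:] - t_full[start]     # restart the clock at the candidate edge
    try:
        popt, _ = curve_fit(decaying_cosine, tail_t, v_full[start:], p0=p0)
    except RuntimeError:                        # fit failed to converge: treat as a bad fit
        return False
    resid = v_full[start:] - decaying_cosine(tail_t, *popt)
    return np.std(resid) < threshold

lo, hi = 0, len(v_full) // 2    # the right-hand half is assumed to be known-good
while lo < hi:
    mid = (lo + hi) // 2
    if fits_well(mid):
        hi = mid                # fit still good: push the edge further left
    else:
        lo = mid + 1            # we have hit the flat pre-release region: move right
edge_index = lo                 # roughly 1024 for this synthetic example
print(edge_index)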
Summary
You don't have to follow this exact procedure, but the basic gist is that you can perform some analysis on a small part of the data starting on the very right, and then gradually increase the time domain until you reach a point where you find that the errors are getting larger rather than smaller as they ought to be.
Furthermore, you can also combine different heuristics to see if they agree with each other. If they don't, then it's probably a rather sketchy data point that requires some manual intervention.
Note that one particular advantage of the algorithm I sketched above is that you will get the results you want (damping constant and frequency) as part of the process, along with uncertainty estimates.
I neglected to mention most of the gory mathematical and algorithmic details so as to provide a general overview, but if needed I can provide more detail.

Does gaussian_filter1d not work well in higher orders?

OK, so what I'm trying to do is build a scale space on a 1D data set, where the entire data set is presumably drawn from a sum-of-Gaussians function. To do this, I have to apply a Gaussian convolution to the data set. My end goal is to find the number of Gaussians in this data set from the number of zero crossings in the second-order derivative of the convolved data. The reasons for this come from this article.
Now the problem occurs with scipy's gaussian_filter1d, which I'm using to do the convolution. I assume that when it says "filter" it only means a convolution with a Gaussian, because there is already a separate function for fourier_gaussian_filter. In addition, to avoid approximation, I'm using gaussian_filter1d's own 2nd-order derivative and then applying the convolution. The problem occurs when I keep lowering the sigma of the Gaussian filter, which you would expect to act more and more like a Dirac delta. And this is indeed what occurs at smaller values of sigma for the zero-order derivative. Unfortunately, when I apply a 2nd-order-derivative Gaussian filter, the data does not have the zero crossings that I expect it to. In fact, it doesn't have any zero crossings even when there is only one Gaussian in the original data.
Some possible ideas that came to me about what could be the problem are that an actual delta function doesn't have a derivative, and that the derivative of a really-small-sigma Gaussian can't approximate the derivative of a delta. But I wanted to hear the community's thoughts on the problem. Thank you for reading this post.
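For reference, a minimal sketch of the pipeline being described, on synthetic single-Gaussian data (the sigma values are arbitrary):

import numpy as np
from scipy.ndimage import gaussian_filter1d

# One Gaussian bump as test data.
x = np.linspace(-3, 3, 600)
data = np.exp(-x**2 / (2 * 0.5**2))

# Smooth and take the second derivative in one step (order=2); sigma is in samples.
second_deriv = gaussian_filter1d(data, sigma=10, order=2)

# Count sign changes, i.e. zero crossings, of the second derivative.
crossings = np.count_nonzero(np.diff(np.sign(second_deriv)))
print(crossings)   # a single Gaussian should give 2 zero crossings (near +/- its width)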
