Scipy implementation of Savitzky-Golay filter - python

I was looking at the scipy cookbook implementation of the Savitzky-Golay algorithm:
#!python
def savitzky_golay(y, window_size, order, deriv=0, rate=1):
r"""Smooth (and optionally differentiate) data with a Savitzky-Golay filter.
The Savitzky-Golay filter removes high frequency noise from data.
It has the advantage of preserving the original shape and
features of the signal better than other types of filtering
approaches, such as moving average techniques.
Parameters
----------
y : array_like, shape (N,)
the values of the time history of the signal.
window_size : int
the length of the window. Must be an odd integer number.
order : int
the order of the polynomial used in the filtering.
Must be less than `window_size` - 1.
deriv: int
the order of the derivative to compute (default = 0 means only smoothing)
Returns
-------
ys : ndarray, shape (N)
the smoothed signal (or its n-th derivative).
Notes
-----
The Savitzky-Golay is a type of low-pass filter, particularly
suited for smoothing noisy data. The main idea behind this
approach is to make for each point a least-square fit with a
polynomial of high order over an odd-sized window centered at
the point.
Examples
--------
t = np.linspace(-4, 4, 500)
y = np.exp( -t**2 ) + np.random.normal(0, 0.05, t.shape)
ysg = savitzky_golay(y, window_size=31, order=4)
import matplotlib.pyplot as plt
plt.plot(t, y, label='Noisy signal')
plt.plot(t, np.exp(-t**2), 'k', lw=1.5, label='Original signal')
plt.plot(t, ysg, 'r', label='Filtered signal')
plt.legend()
plt.show()
References
----------
.. [1] A. Savitzky, M. J. E. Golay, Smoothing and Differentiation of
Data by Simplified Least Squares Procedures. Analytical
Chemistry, 1964, 36 (8), pp 1627-1639.
.. [2] Numerical Recipes 3rd Edition: The Art of Scientific Computing
W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery
Cambridge University Press ISBN-13: 9780521880688
"""
import numpy as np
from math import factorial
try:
window_size = np.abs(int(window_size))
order = np.abs(int(order))
except ValueError as msg:
raise ValueError("window_size and order have to be of type int")
if window_size % 2 != 1 or window_size < 1:
raise TypeError("window_size size must be a positive odd number")
if window_size < order + 2:
raise TypeError("window_size is too small for the polynomials order")
order_range = range(order+1)
half_window = (window_size -1) // 2
# precompute coefficients
b = np.mat([[k**i for i in order_range] for k in range(-half_window, half_window+1)])
m = np.linalg.pinv(b).A[deriv] * rate**deriv * factorial(deriv)
# pad the signal at the extremes with
# values taken from the signal itself
firstvals = y[0] - np.abs( y[1:half_window+1][::-1] - y[0] )
lastvals = y[-1] + np.abs(y[-half_window-1:-1][::-1] - y[-1])
y = np.concatenate((firstvals, y, lastvals))
return np.convolve( m[::-1], y, mode='valid')
This is the part that confuses me:
firstvals = y[0] - np.abs( y[1:half_window+1][::-1] - y[0] )
lastvals = y[-1] + np.abs(y[-half_window-1:-1][::-1] - y[-1])
y = np.concatenate((firstvals, y, lastvals))
I get that we need to 'pad' y, since otherwise the first window_size/2 points would be excluded, but I don't see the point of subtracting a particular value's absolute difference with y[0] from y[0].
I don't think the absolute value should be there, as otherwise, the trend gets mirrored horizontally if it starts by increasing, and vertically if it starts by decreasing.
As pointed out by @ImportanceOfBeingErnest, this may be a typo in the code, as can be seen on the left-hand side of the plot on the page I linked to.

Indeed, this logic isn't right, which can be best seen by considering the case of y[0] and y[-1] being 0. I believe the intent was to achieve odd reflection, so that the first derivative would be continuous at the reflection point. The correct form for that is
firstvals = 2*y[0] - y[1:half_window+1][::-1]
lastvals = 2*y[-1] - y[-half_window-1:-1][::-1]
or, combining reversing and slicing in one step,
firstvals = 2*y[0] - y[half_window:0:-1]
lastvals = 2*y[-1] - y[-2:-half_window-2:-1]
I should emphasize this is just some code contributed by a user. The actual SciPy implementation of the Savitzky-Golay filter is entirely different.
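For completeness: if you just need the filtering itself rather than the cookbook code, here is a minimal sketch using scipy.signal.savgol_filter (available since SciPy 0.14); the window length and polynomial order are just example values:
import numpy as np
from scipy.signal import savgol_filter
t = np.linspace(-4, 4, 500)
y = np.exp(-t**2) + np.random.normal(0, 0.05, t.shape)
# window_length must be odd and greater than polyorder;
# deriv > 0 would return the smoothed derivative instead.
ysg = savgol_filter(y, window_length=31, polyorder=4)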

Related

How to fit a piecewise (alternating linear and constant segments) function to a parabolic function?

I do have a function, for example y = k * x^(1/a), but this can be something else as well, like a quadratic or logarithmic function. I am only interested in the domain x ∈ [1, 50000]. The parameters of the function (a and k in this case) are known as well.
My goal is to fit a continuous piece-wise function to this, which contains alternating segments of linear functions (i.e. sloped straight segments, each with intercept of 0) and constants (i.e. horizontal segments joining the sloped segments together). The first and last segments are both sloped. And the number of segments should be pre-selected between around 9-29 (that is 5-15 linear steps + 4-14 constant plateaus).
Formally:
The input function: f(x) = k * x^(1/a)
The fitted piecewise function: g(x) = b_i * x on the sloped segments and g(x) = c_i on the constant plateaus, with breakpoints r_1 < r_2 < ... separating consecutive segments.
I am looking for the optimal resulting parameters (c,r,b) (in terms of least squares) if the segment numbers (n) are specified beforehand.
The resulting constants (c) and breakpoints (r) should be whole natural numbers, and the slopes (b) values rounded to two decimal places.
I have tried to do the fitting numerically with the pwlf package using a segmented constant model, and then further processed the resulting constant model with some graphical intuition to "slice" the constant steps with the slopes. It works to some extent, but I am sure this is suboptimal from both a fitting and a computational-efficiency perspective. It takes multiple minutes to generate a fit with 8 slopes on the range 1-50000. I am sure there must be a better way to do this.
My idea is that instead of using only numerical methods/ML, the fact that we have the algebraic form of the input function could be exploited in some way, at least to use algebraic transforms (integrals) to arrive at a simpler optimization problem.
import numpy as np
import matplotlib.pyplot as plt
import pwlf
# The input function
def input_func(x,k,a):
return np.power(x,1/a)*k
x = np.arange(1,5e4)
y = input_func(x, 1.8, 1.3)
plt.plot(x,y);
def pw_fit(func, x_r, no_seg, *fparams):
# working on the specified range
x = np.arange(1,x_r)
y_input = func(x, *fparams)
my_pwlf = pwlf.PiecewiseLinFit(x, y_input, degree=0)
res = my_pwlf.fit(no_seg)
yHat = my_pwlf.predict(x)
# Function values at the breakpoints
y_isec = func(res, *fparams)
# Slope values at the breakpoints
slopes = np.round(y_isec / res, decimals=2)
slopes = slopes[1:]
# For the first slope value, I use the intersection of the first constant plateau and the input function
slopes = np.insert(slopes,0,np.round(y_input[np.argwhere(np.diff(np.sign(y_input - yHat))).flatten()[0]] / np.argwhere(np.diff(np.sign(y_input - yHat))).flatten()[0], decimals=2))
plateaus = np.unique(np.round(yHat))
# If due to rounding slope values (to two decimals), there is no change in a subsequent step, I just remove those segments
to_del = np.argwhere(np.diff(slopes) == 0).flatten()
slopes = np.delete(slopes,to_del + 1)
plateaus = np.delete(plateaus,to_del)
breakpoints = [np.ceil(plateaus[0]/slopes[0])]
for idx, j in enumerate(slopes[1:-1]):
breakpoints.append(np.floor(plateaus[idx]/j))
breakpoints.append(np.ceil(plateaus[idx+1]/j))
breakpoints.append(np.floor(plateaus[-1]/slopes[-1]))
return slopes, plateaus, breakpoints
slo, plat, breaks = pw_fit(input_func, 50000, 8, 1.8, 1.3)
# The piecewise function itself
def pw_calc(x, slopes, plateaus, breaks):
x = x.astype('float')
cond_list = [x < breaks[0]]
for idx, j in enumerate(breaks[:-1]):
cond_list.append((j <= x) & (x < breaks[idx+1]))
cond_list.append(breaks[-1] <= x)
func_list = [lambda x: x * slopes[0]]
for idx, j in enumerate(slopes[1:]):
func_list.append(plateaus[idx])
func_list.append(lambda x, j=j: x * j)
return np.piecewise(x, cond_list, func_list)
y_output = pw_calc(x, slo, plat, breaks)
plt.plot(x,y,y_output);
(Not important, but I think the fitted piecewise function is not continuous as it is. Intervals should be x<=r1; r1<x<=r2; ....)
As Anatolyg has pointed out, it looks to me as if, in the optimal solution (for the function posted at least, and probably for any where the derivative is different from zero), the horizontal segments will collapse to a point or the minimum segment length (in this case 1).
EDIT---------------------------------------------
The behavior above could only be valid if the slopes could have an intercept. If the intercepts are zero, as posted in the question, one consideration must be taken into account: is the initial parabolic function defined at zero or nearby? Imagine the function y = 0.001*sqrt(x-1000); then the segments defined as b*x will have a slope close to zero and will be so similar to the constant segments that the best fit will simply be the single zero-intercept line that best fits the whole function.
Provided that the function is defined at zero or nearby, you can start by approximating the curve just with linear segments (with intercepts):
1. Divide the function domain into N intervals (either equal intervals, or intervals whose size is a function of the average curvature (or second derivative) of the function along the domain).
2. Do a linear fit/regression in each interval.
3. For each interval, if a point (or a bunch of points) at the edge of the interval is fitted better by the line of the neighboring interval than by the line of its own interval, reassign that point to the neighboring interval.
4. Repeat from step 2 until no edge points are moved.
The linear regressions can be optimized by not recalculating all the covariance matrices from scratch on each iteration, but just adding the contributions of the moved points to the previous covariance matrices.
Then each linear segment (LSi) is replaced by a combination of a small constant segment at the beginning (Cbi), a linear segment without intercept (Si), and another constant segment at the end (Cei). These segments are easy to calculate, as Si will contain the middle point of LSi, and Cbi and Cei will have, respectively, the begin and end values of the segment LSi. The intervals of each segment then have to be calculated as intersections between lines.
With this, the constant end segment will be collinear with the constant begin segment of the next interval, so they will merge, resulting in a series of interleaved constant and linear segments.
But this would be a floating-point starting solution. Next, you will have to apply all the roundings, which will mess up the segments quite a lot, as the constraints (integer intervals and linear segments without intercept) can conflict strongly. In fact, b, c, r are not totally independent: if c_i and r_(i+1) are known, then b_(i+1) is already fixed.
If nothing is broken so far, the final task will be to minimize the error/cost function (I assume it will be the integral of the error between the parabolic function and the segments). My guess is that the gradients here will be quite a pain: if you change, for example, one c_i, all the rest of the b_j and c_j will have to adapt as well due to the integer-interval restriction. However, if you can generalize the derivatives between parameters (how much do I have to adapt b_(i+1) if c_i changes by a unit), you can propagate the change of one parameter to all the other parameters and have a kind of gradient. Then, for each interval, you can estimate what the ideal parameter would be, and by averaging over all intervals calculate the best gradient step. Let me illustrate this:
Assuming first that r parameters are fixed, if I change c1 by one unit, b2 changes by 0.1, c2 changes by -0.2 and b3 changes by 0.2. This would be the gradient.
Then I estimate, comparing with the parabolic curve, that c1 should increase 0.5 (to reduce the cost by 10 points), b2 should increase 0.2 (to reduce the cost by 5 points), c2 should increase 0.2 (to reduce the cost by 6 points) and b3 should increase 0.1 (to reduce the cost by 9 points).
Finally, the gradient step would be (0.5/1·10 + 0.2/0.1·5 + 0.2/(-0.2)·6 + 0.1/0.2·9)/(10 + 5 + 6 + 9) ≈ 0.45. Thus, c1 would increase 0.45 units, b2 would increase 0.45·0.1, and so on.
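A quick numeric check of that weighted step, using the example numbers above:
# sensitivity of each parameter to a unit change in c1, the desired change,
# and the cost reduction used as weight (order: c1, b2, c2, b3)
sens = [1.0, 0.1, -0.2, 0.2]
desired = [0.5, 0.2, 0.2, 0.1]
weight = [10.0, 5.0, 6.0, 9.0]
step = sum(d / s * w for d, s, w in zip(desired, sens, weight)) / sum(weight)
print(step)  # 0.45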
When you add the r parameters to the pot, as integer intervals do not have a proper derivative, the calculation is not straightforward. However, you can consider the r parameters as floating point, calculate and apply the gradient step, and then apply the roundings.
We can integrate the squared error function for linear and constant pieces and let SciPy optimize it. Python 3:
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize
xl = 1
xh = 50000
a = 1.3
p = 1 / a
n = 8
def split_b_and_c(bc):
return bc[::2], bc[1::2]
def solve_for_r(b, c):
r = np.empty(2 * n)
r[0] = xl
r[1:-1:2] = c / b[:-1]
r[2::2] = c / b[1:]
r[-1] = xh
return r
def linear_residual_integral(b, x):
return (
(x ** (2 * p + 1)) / (2 * p + 1)
- 2 * b * x ** (p + 2) / (p + 2)
+ b ** 2 * x ** 3 / 3
)
def constant_residual_integral(c, x):
return x ** (2 * p + 1) / (2 * p + 1) - 2 * c * x ** (p + 1) / (p + 1) + c ** 2 * x
def squared_error(bc):
b, c = split_b_and_c(bc)
r = solve_for_r(b, c)
linear = np.sum(
linear_residual_integral(b, r[1::2]) - linear_residual_integral(b, r[::2])
)
constant = np.sum(
constant_residual_integral(c, r[2::2])
- constant_residual_integral(c, r[1:-1:2])
)
return linear + constant
def evaluate(x, b, c, r):
i = 0
while x > r[i + 1]:
i += 1
return b[i // 2] * x if i % 2 == 0 else c[i // 2]
def main():
bc0 = (xl + (xh - xl) * np.arange(1, 4 * n - 2, 2) / (4 * n - 2)) ** (
p - 1 + np.arange(2 * n - 1) % 2
)
bc = scipy.optimize.minimize(
squared_error, bc0, bounds=[(1e-06, None) for i in range(2 * n - 1)]
).x
b, c = split_b_and_c(bc)
r = solve_for_r(b, c)
X = np.linspace(xl, xh, 1000)
Y = [evaluate(x, b, c, r) for x in X]
plt.plot(X, X ** p)
plt.plot(X, Y)
plt.show()
if __name__ == "__main__":
main()
I have tried to come up with a new solution myself, based on the idea of @Amo Robb, where I have partitioned the domain and curve-fitted a dual piece (constant and linear) together (with the help of np.maximum). I have used 1 / f'(x) as the function to designate the breakpoints, but I know this is arbitrary and does not provide a global optimum. Maybe there is some optimal function for these breakpoints. But this solution is OK for me, as it might be appropriate to have a better fit at the first segments, at the expense of the error for the later segments. (The task itself is actually a cost-based retail margin calculation {supply price -> added margin}, as the retail POS software can only work with such a piecewise margin function.)
The answer from @David Eisenstat is the correct optimal solution if the parameters are allowed to be floats. Unfortunately the POS software cannot use floats. It is OK to round the c's and r's afterwards, but the b's should be rounded to two decimals, as those are input as percentages, and this constraint would ruin the optimal solution with long floats. I will try to further improve my solution with both Amo's and David's valuable input. Thank you for that!
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# The input function f(x)
def input_func(x,k,a):
return np.power(x,1/a) * k
# 1 / f(x)'
def one_per_der(x,k,a):
return a / (k * np.power(x, 1/a-1))
# 1 / f(x)' inverted
def one_per_der_inv(x,k,a):
return np.power(a / (x*k), a / (1-a))
def segment_fit(start,end,y,first_val):
b, _ = curve_fit(lambda x,b: np.maximum(first_val, b*x), np.arange(start,end), y[start-1:end-1])
b = float(np.round(b, decimals=2))
bp = np.round(first_val / b)
last_val = np.round(b * end)
return b, bp, last_val
def pw_fit(end_range, no_seg, **fparams):
y_bps = np.linspace(one_per_der(1, **fparams), one_per_der(end_range,**fparams) , no_seg+1)[1:]
x_bps = np.round(one_per_der_inv(y_bps, **fparams))
y = input_func(x, **fparams)
slopes = [np.round(float(curve_fit(lambda x,b: x * b, np.arange(1,x_bps[0]), y[:int(x_bps[0])-1])[0]), decimals = 2)]
plats = [np.round(x_bps[0] * slopes[0])]
bps = []
for i, xbp in enumerate(x_bps[1:]):
b, bp, last_val = segment_fit(int(x_bps[i]+1), int(xbp), y, plats[i])
slopes.append(b); bps.append(bp); plats.append(last_val)
breaks = sorted(list(x_bps) + bps)[:-1]
# If due to rounding slope values (to two decimals), there is no change in a subsequent step, I just remove those segments
to_del = np.argwhere(np.diff(slopes) == 0).flatten()
breaks_to_del = np.concatenate((to_del * 2, to_del * 2 + 1))
slopes = np.delete(slopes,to_del + 1)
plats = np.delete(plats[:-1],to_del)
breaks = np.delete(breaks,breaks_to_del)
return slopes, plats, breaks
def pw_calc(x, slopes, plateaus, breaks):
x = x.astype('float')
cond_list = [x < breaks[0]]
for idx, j in enumerate(breaks[:-1]):
cond_list.append((j <= x) & (x < breaks[idx+1]))
cond_list.append(breaks[-1] <= x)
func_list = [lambda x: x * slopes[0]]
for idx, j in enumerate(slopes[1:]):
func_list.append(plateaus[idx])
func_list.append(lambda x, j=j: x * j)
return np.piecewise(x, cond_list, func_list)
fparams = {'k':1.8, 'a':1.2}
end_range = 5e4
no_steps = 10
x = np.arange(1, end_range)
y = input_func(x, **fparams)
slopes, plats, breaks = pw_fit(end_range, no_steps, **fparams)
y_output = pw_calc(x, slopes, plats, breaks)
plt.plot(x,y_output,y);

Python Curve Smoothing using Savitzky_Golay - issue

Input File CSV data link. My Python code is pasted beneath. The curve smoothing does not really seem to be working: when I plot the smoothed curve on top of the parent data, they overlap exactly. Could someone please help me resolve the issue? The code uses the Savitzky-Golay algorithm; it extracts the x, y axis data from a CSV file and formats it to suit the data type required for the savitzky_golay function call.
import numpy as np
import csv
from math import factorial
import itertools
import matplotlib.pyplot as plt
from matplotlib.ticker import FormatStrFormatter
#from scipy.interpolate import spline
#import openpyxl
#import pandas as pd
#from scipy.interpolate import interp1d
def savitzky_golay(y, window_size, order, deriv=0, rate=1):
r"""Smooth (and optionally differentiate) data with a Savitzky-Golay filter.
The Savitzky-Golay filter removes high frequency noise from data.
It has the advantage of preserving the original shape and
features of the signal better than other types of filtering
approaches, such as moving average techniques.
Parameters
----------
y : array_like, shape (N,)
the values of the time history of the signal.
window_size : int
the length of the window. Must be an odd integer number.
order : int
the order of the polynomial used in the filtering.
Must be less than `window_size` - 1.
deriv: int
the order of the derivative to compute (default = 0 means only smoothing)
Returns
-------
ys : ndarray, shape (N)
the smoothed signal (or its n-th derivative).
Notes
-----
The Savitzky-Golay is a type of low-pass filter, particularly
suited for smoothing noisy data. The main idea behind this
approach is to make for each point a least-square fit with a
polynomial of high order over an odd-sized window centered at
the point.
Examples
--------
t = np.linspace(-4, 4, 500)
y = np.exp( -t**2 ) + np.random.normal(0, 0.05, t.shape)
ysg = savitzky_golay(y, window_size=31, order=4)
import matplotlib.pyplot as plt
plt.plot(t, y, label='Noisy signal')
plt.plot(t, np.exp(-t**2), 'k', lw=1.5, label='Original signal')
plt.plot(t, ysg, 'r', label='Filtered signal')
plt.legend()
plt.show()
References
----------
.. [1] A. Savitzky, M. J. E. Golay, Smoothing and Differentiation of
Data by Simplified Least Squares Procedures. Analytical
Chemistry, 1964, 36 (8), pp 1627-1639.
.. [2] Numerical Recipes 3rd Edition: The Art of Scientific Computing
W.H. Press, S.A. Teukolsky, W.T. Vetterling, B.P. Flannery
Cambridge University Press ISBN-13: 9780521880688
"""
try:
window_size = np.abs(int(window_size))
order = np.abs(int(order))
except ValueError as msg:
raise ValueError("window_size and order have to be of type int")
if window_size % 2 != 1 or window_size < 1:
raise TypeError("window_size size must be a positive odd number")
if window_size < order + 2:
raise TypeError("window_size is too small for the polynomials order")
order_range = range(order+1)
half_window = (window_size -1) // 2
# precompute coefficients
b = np.mat([[k**i for i in order_range] for k in range(-half_window, half_window+1)])
m = np.linalg.pinv(b).A[deriv] * rate**deriv * factorial(deriv)
# pad the signal at the extremes with
# values taken from the signal itself
firstvals = y[0] - np.abs( y[1:half_window+1][::-1] - y[0] )
lastvals = y[-1] + np.abs(y[-half_window-1:-1][::-1] - y[-1])
y = np.concatenate((firstvals, y, lastvals))
return np.convolve( m[::-1], y, mode='valid')
CurveName_1 = "Actual"
ind1, ind2 = 0,0
check = 0
for line in open(r'C:\Users\XYZ\Documents\FileTransfers\Vicky.csv'):
csv_row = line.split(",")
csv_row = map(str.strip, csv_row)
csv_row = [i.replace('"', '') for i in csv_row]
if CurveName_1 in csv_row:
ind1 = csv_row.index(CurveName_1)
check += 1
if check > 1:
break
x = []
y = []
with open(r'C:\Users\XYZ\Documents\FileTransfers\Vicky.csv', "r") as file:
reader = csv.reader(file)
for idx,line in enumerate(reader):
if idx>3:
x.append(float(line[ind1]))
y.append(float(line[0]))
#t=line[ind1],line[ind2]
#print t
print(len(x))
print(len(y))
xm = np.array(x)
ym = np.array(y)
#ym = np.array(ym)
yhat = savitzky_golay(ym, 51, 3) # window size 51, polynomial order 3
plt.grid(True)
plt.minorticks_on()
# Customize the major grid
plt.grid(which='major', linestyle='-', linewidth='0.5', color='red')
# Customize the minor grid
plt.grid(which='minor', linestyle=':', linewidth='0.5', color='black')
axes = plt.subplot(111)
axes.xaxis.set_minor_formatter(FormatStrFormatter("%.2f"))
axes.yaxis.set_minor_formatter(FormatStrFormatter("%.2f"))
plt.plot(yhat, xm)
plt.plot(ym,xm, color='red')
plt.show()
Maybe a little late, and maybe not the exact answer to your question, but for a very similar application I use pandas.read_excel to import the data and scipy.signal.savgol_filter for filtering: the less I implement, the more chances it has of working properly...
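For example, a minimal sketch along those lines (the file name, skiprows value and column selection are assumptions here; adjust them to your CSV layout):
import pandas as pd
import matplotlib.pyplot as plt
from scipy.signal import savgol_filter
df = pd.read_csv('Vicky.csv', skiprows=3)        # hypothetical file/layout
y = df.iloc[:, 0].to_numpy()                     # pick the column you need
yhat = savgol_filter(y, window_length=51, polyorder=3)
plt.plot(y, label='raw')
plt.plot(yhat, label='filtered')
plt.legend()
plt.show()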

Rotating 1D numpy array of radial intensities into 2D array of spacial intensities

I have a numpy array filled with intensity readings at different radii in a uniform circle (for context, this is a 1D radiative transfer project for protostellar formation models: while much better models exist, my supervisor wants me to have the experience of producing one so I understand how others work).
I want to take that 1D array and "rotate" it through a circle, forming a 2D array of intensities that could then be shown with imshow (or, with a bit of work, aplpy). The final array needs to be 2D, and the projection needs to be Cartesian, not polar.
I can do it with nested for loops, and I can do it with lookup tables, but I have a feeling there must be a neat way of doing it in numpy or something.
Any ideas?
EDIT:
I have had to go back and recreate my (frankly horrible) mess of for loops and if statements that I had before. If I really tried, I could probably get rid of one of the loops and one of the if statements by condensing things down. However, the aim is not to make it work with for loops, but to see if there is a built-in way to rotate the array.
impB is an array that differs slightly from what I stated it was before. It's actually just a list of radii where particles are detected. I then bin those into radius bins to get the intensity (or frequency, if you prefer) in each radius bin. R is the scale factor for my radius, as I run the model in a dimensionless way. iRes is a resolution scale factor, essentially how often I want to sample my radial bins. Everything else should be clear.
radJ = np.ndarray(shape=(2*iRes, 2*iRes)) # Create array of 2xRadius square
for i in range(iRes):
n = len(impB[np.where(impB[:] < ((i+1.) * (R / iRes)))]) # Count number of things within this radius +1
m = len(impB[np.where(impB[:] <= ((i) * (R / iRes)))]) # Count number of things in this radius
a = (((i + 1) * (R / iRes))**2 - ((i) * (R / iRes))**2) * math.pi # A normalisation factor based on area.....dont ask
for x in range(iRes):
for y in range(iRes):
if (x**2 + y**2) < (i * iRes)**2:
if (x**2 + y**2) >= (i * iRes)**2: # Checks for radius, and puts in cartesian space
radJ[x+iRes,y+iRes] = (n-m) / a # Put in actual intensity bins
radJ[x+iRes,-y+iRes] = (n-m) / a
radJ[-x+iRes,y+iRes] = (n-m) / a
radJ[-x+iRes,-y+iRes] = (n-m) / a
Nested loops are a simple approach for that. With ri_data_r and y containing your radius values (difference to the middle pixel) and the array for rotation, respectively, I would suggest:
from scipy import interpolate
import numpy as np
y = np.random.rand(100)
ri_data_r = np.linspace(-len(y)/2,len(y)/2,len(y))
interpol_index = interpolate.interp1d(ri_data_r, y)
xv = np.arange(-1, 1, 0.01) # adjust your matrix values here
X, Y = np.meshgrid(xv, xv)
profilegrid = np.ones(X.shape, float)
for i, x in enumerate(X[0, :]):
for k, y in enumerate(Y[:, 0]):
current_radius = np.sqrt(x ** 2 + y ** 2)
profilegrid[i, k] = interpol_index(current_radius)
print(profilegrid)
This will give you exactly what you are looking for. You just have to take your array and calculate a symmetric array ri_data_r that has the same length as your data array and contains the distance between each data point and the middle of the array. The code does this automatically.
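The same interpolation can also be done without the Python loops by evaluating np.interp over the whole radius grid at once; a small sketch under the same setup as above:
import numpy as np
y = np.random.rand(100)
ri_data_r = np.linspace(-len(y)/2, len(y)/2, len(y))
xv = np.arange(-1, 1, 0.01)
X, Y = np.meshgrid(xv, xv)
radius = np.sqrt(X**2 + Y**2)
# interpolate every pixel's radius in one call and reshape back to 2D
profilegrid = np.interp(radius.ravel(), ri_data_r, y).reshape(radius.shape)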
I stumbled upon this question in a different context and I hope I understood it right. Here are two other ways of doing this. The first uses skimage.transform.warp with interpolation of the desired order (here we use order=0, nearest-neighbor). This method is slower but more precise and needs less memory than the second method.
The second one does not use interpolation, and is therefore faster but also less precise and needs much more memory, because it stores each 2D array containing one tilt until the end, where they are averaged with np.nanmean().
The difference between the two solutions stems from the problem of handling the center of the final image, where the tilts overlap the most: the first one would just add values with each tilt, ending up outside the original range. This was "solved" by clipping the matrix in each step to a global_min and global_max (consult the code). The second one solves it by taking the mean of the tilts where they overlap, which forces us to use np.nan.
Please, read the Example of usage and Sanity check sections in order to understand the plot titles.
Solution 1:
import numpy as np
from skimage.transform import warp
def rotate_vector(vector, deg_angle):
# Credit goes to skimage.transform.radon
assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
center = vector.size // 2
square = np.zeros((vector.size, vector.size))
square[center,:] = vector
rad_angle = np.deg2rad(deg_angle)
cos_a, sin_a = np.cos(rad_angle), np.sin(rad_angle)
R = np.array([[cos_a, sin_a, -center * (cos_a + sin_a - 1)],
[-sin_a, cos_a, -center * (cos_a - sin_a - 1)],
[0, 0, 1]])
# Approx. 80% of time is spent in this function
return warp(square, R, clip=False, output_shape=((vector.size, vector.size)))
def place_vectors(vectors, deg_angles):
matrix = np.zeros((vectors.shape[-1], vectors.shape[-1]))
global_min, global_max = 0, 0
for i, deg_angle in enumerate(deg_angles):
tilt = rotate_vector(vectors[i], deg_angle)
global_min = tilt.min() if global_min > tilt.min() else global_min
global_max = tilt.max() if global_max < tilt.max() else global_max
matrix += tilt
matrix = np.clip(matrix, global_min, global_max)
return matrix
Solution 2:
Credit for the idea goes to my colleague Michael Scherbela.
import numpy as np
def rotate_vector(vector, deg_angle):
assert vector.ndim == 1, 'Pass only 1D vectors, e.g. use array.ravel()'
square = np.ones([vector.size, vector.size]) * np.nan
radius = vector.size // 2
r_values = np.linspace(-radius, radius, vector.size)
rad_angle = np.deg2rad(deg_angle)
ind_x = np.round(np.cos(rad_angle) * r_values + vector.size/2).astype(int)
ind_y = np.round(np.sin(rad_angle) * r_values + vector.size/2).astype(int)
ind_x = np.clip(ind_x, 0, vector.size-1)
ind_y = np.clip(ind_y, 0, vector.size-1)
square[ind_y, ind_x] = vector
return square
def place_vectors(vectors, deg_angles):
matrices = []
for deg_angle, vector in zip(deg_angles, vectors):
matrices.append(rotate_vector(vector, deg_angle))
matrix = np.nanmean(np.array(matrices), axis=0)
return np.nan_to_num(matrix, copy=False, nan=0.0)
Example of usage:
r = 100 # Radius of the circle, i.e. half the length of the vector
n = int(np.pi * r / 8) # Number of vectors, e.g. number of tilts in tomography
v = np.ones(2*r) # One vector, e.g. one tilt in tomography
V = np.array([v]*n) # All vectors, e.g. a sinogram in tomography
# Rotate 1D vector to a specific angle (output is 2D)
angle = 45
rotated = rotate_vector(v, angle)
# Rotate each row of a 2D array according to its angle (output is 2D)
angles = np.linspace(-90, 90, num=n, endpoint=False)
inplace = place_vectors(V, angles)
Sanity check:
These are just simple checks which by no means cover all possible edge cases. Depending on your use case you might want to extend the checks and adjust the method.
# I. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then sum(inplace) should be approx. equal to (n * (2πr - n)) / π
# which is an area that should be covered by the tilts
desired_area = (n * (2 * np.pi * r - n)) / np.pi
covered_area = np.sum(inplace)
covered_frac = covered_area / desired_area
print(f'This method covered {covered_frac * 100:.2f}% '
'of the area which should be covered in total.')
# II. Sanity check
# Assuming n <= πr and v = np.ones(2r)
# Then a circle M with radius m <= r should be the largest circle which
# is fully covered by the vectors. I.e. its mean should be no less than 1.
# If n = πr then m = r.
# m = n / π
m = int(n / np.pi)
# Code for circular mask not included
mask = create_circular_mask(2*r, 2*r, center=None, radius=m)
m_area = np.mean(inplace[mask])
print(f'Full radius r={r}, radius m={m}, mean(M)={m_area:.4f}.')
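The create_circular_mask helper is not included above; a minimal sketch of what such a helper could look like (my assumption: a boolean mask selecting all pixels within the given radius of the center) is:
import numpy as np
def create_circular_mask(h, w, center=None, radius=None):
    # default to the image center and the largest circle that fits
    if center is None:
        center = (w / 2, h / 2)
    if radius is None:
        radius = min(center[0], center[1], w - center[0], h - center[1])
    Y, X = np.ogrid[:h, :w]
    dist = np.sqrt((X - center[0])**2 + (Y - center[1])**2)
    return dist <= radius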
Code for plotting:
import matplotlib.pyplot as plt
plt.figure(figsize=(16, 8))
plt.subplot(121)
rotated = np.nan_to_num(rotated) # not necessary in case of the first method
plt.title(
f'Output of rotate_vector(), angle={angle}°\n'
f'Sum is {np.sum(rotated):.2f} and should be {np.sum(v):.2f}')
plt.imshow(rotated, cmap=plt.cm.Greys_r)
plt.subplot(122)
plt.title(
f'Output of place_vectors(), r={r}, n={n}\n'
f'Covered {covered_frac * 100:.2f}% of the area which should be covered.\n'
f'Mean of the circle M is {m_area:.4f} and should be 1.0.')
plt.imshow(inplace)
circle=plt.Circle((r, r), m, color='r', fill=False)
plt.gcf().gca().add_artist(circle)
plt.gcf().gca().legend([circle], [f'Circle M (m={m})'])

Second order gradient in numpy

I am trying to calculate the 2nd-order gradient numerically of an array in numpy.
a = np.sin(np.arange(0, 10, .01))
da = np.gradient(a)
dda = np.gradient(da)
This is what I came up with. Is this the way it should be done?
I am asking this, because in numpy there isn't an option saying np.gradient(a, order=2). I am concerned about whether this usage is wrong, and that is why numpy does not have this implemented.
PS1: I do realize that there is np.diff(a, 2). But this is only single-sided estimation, so I was curious why np.gradient does not have a similar keyword.
PS2: The np.sin() is a toy data - the real data does not have an analytic form.
Thank you!
I'll second @jrennie's first sentence - it can all depend. The numpy.gradient function requires that the data be evenly spaced (although it allows for different distances in each direction if multi-dimensional). If your data does not adhere to this, then numpy.gradient isn't going to be much use. Experimental data may have (OK, will have) noise on it, in addition to not necessarily being evenly spaced. In this case it might be better to use one of the scipy.interpolate spline functions (or objects). These can take unevenly spaced data, allow for smoothing, and can return derivatives up to k-1 where k is the order of the spline fit requested. The default value for k is 3, so a second derivative is just fine.
Example:
import scipy.interpolate
spl = scipy.interpolate.splrep(x, y, k=3)     # no smoothing, 3rd order spline
ddy = scipy.interpolate.splev(x, spl, der=2)  # use those knots to get the second derivative
The object oriented splines like scipy.interpolate.UnivariateSpline have methods for the derivatives. Note that the derivative methods are implemented in Scipy 0.13 and are not present in 0.12.
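For example, a short sketch with the object-oriented interface (s=0 means an interpolating spline with no smoothing; increase s for noisy data):
import numpy as np
from scipy.interpolate import UnivariateSpline
x = np.linspace(0, 10, 100)
y = np.sin(x)
spl = UnivariateSpline(x, y, k=3, s=0)  # cubic spline through the data
d2y = spl.derivative(n=2)(x)            # second derivative evaluated at x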
Note that, as pointed out by @JosephCottham in comments in 2018, this answer (good for Numpy 1.08 at least) is no longer applicable since (at least) Numpy 1.14. Check your version number and the available options for the call.
There's no universal right answer for numerical gradient calculation. Before you can calculate the gradient about sample data, you have to make some assumption about the underlying function that generated that data. You can technically use np.diff for gradient calculation. Using np.gradient is a reasonable approach. I don't see anything fundamentally wrong with what you are doing---it's one particular approximation of the 2nd derivative of a 1-D function.
The double gradient approach fails for discontinuities in the first derivative.
As the gradient function takes one data point to the left and to the right into account, this continues/spreads when applying it multiple times.
On the other hand, the second derivative can be calculated by the formula
d^2 f(x[i]) / dx^2 = (f(x[i-1]) - 2*f(x[i]) + f(x[i+1])) / h^2
compare here. This has the advantage of taking only the two neighboring points into account.
In the picture, the double np.gradient approach (left) and the above-mentioned formula (right), as implemented by np.diff, are compared. As f(x) has only one kink at zero, the second derivative (green) should have a peak only there.
As the double gradient solution takes 2 neighboring points in each direction into account, this leads to finite second derivative values at +/- 1.
In some cases, however, you may want to prefer the double gradient solution, as this is more robust to noise.
I am not sure why there are both np.gradient and np.diff, but a reason might be that the second argument of np.gradient defines the pixel distance (for each dimension), and for images it can be applied to both dimensions simultaneously: gy, gx = np.gradient(a).
Code
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(-5,6,1)
f = np.abs(xs)
f_x = np.gradient(f)
f_xx_bad = np.gradient(f_x)
f_xx_good = np.diff(f, 2)
test = f[:-2] - 2* f[1:-1] + f[2:]
# lets plot all this
fig, axs = plt.subplots(1, 2, figsize=(9, 3), sharey=True)
ax = axs[0]
ax.set_title('bad: double gradient')
ax.plot(xs, f, marker='o', label='f(x)')
ax.plot(xs, f_x, marker='o', label='d f(x) / dx')
ax.plot(xs, f_xx_bad, marker='o', label='d^2 f(x) / dx^2')
ax.legend()
ax = axs[1]
ax.set_title('good: diff with n=2')
ax.plot(xs, f, marker='o', label='f(x)')
ax.plot(xs, f_x, marker='o', label='d f(x) / dx')
ax.plot(xs[1:-1], f_xx_good, marker='o', label='d^2 f(x) / dx^2')
ax.plot(xs[1:-1], test, marker='o', label='test', markersize=1)
ax.legend()
As I keep stepping over this problem in one form or another again and again, I decided to write a function gradient_n, which adds a differentiation-order option to np.gradient. Not all functionalities of np.gradient are supported, like differentiation along multiple axes.
Like np.gradient, gradient_n returns the differentiated result in the same shape as the input. A pixel distance argument (d) is also supported.
import numpy as np
def gradient_n(arr, n, d=1, axis=0):
"""Differentiate np.ndarray n times.
Similar to np.diff, but additional support of pixel distance d
and padding of the result to the same shape as arr.
If n is even: np.diff is applied and the result is zero-padded
If n is odd:
np.diff is applied n-1 times and zero-padded.
Then gradient is applied. This ensures the right output shape.
"""
n2 = int((n // 2) * 2)
diff = arr
if n2 > 0:
a0 = max(0, axis)
a1 = max(0, arr.ndim-axis-1)
diff = np.diff(arr, n2, axis=axis) / d**n2
diff = np.pad(diff, tuple([(0,0)]*a0 + [(1,1)] +[(0,0)]*a1),
'constant', constant_values=0)
if n > n2:
assert n-n2 == 1, 'n={:f}, n2={:f}'.format(n, n2)
diff = np.gradient(diff, d, axis=axis)
return diff
def test_gradient_n():
import matplotlib.pyplot as plt
x = np.linspace(-4, 4, 17)
y = np.linspace(-2, 2, 9)
X, Y = np.meshgrid(x, y)
arr = np.abs(X)
arr_x = np.gradient(arr, .5, axis=1)
arr_x2 = gradient_n(arr, 1, .5, axis=1)
arr_xx = np.diff(arr, 2, axis=1) / .5**2
arr_xx = np.pad(arr_xx, ((0, 0), (1, 1)), 'constant', constant_values=0)
arr_xx2 = gradient_n(arr, 2, .5, axis=1)
assert np.sum(arr_x - arr_x2) == 0
assert np.sum(arr_xx - arr_xx2) == 0
fig, axs = plt.subplots(2, 2, figsize=(29, 21))
axs = np.array(axs).flatten()
ax = axs[0]
ax.set_title('x-cut')
ax.plot(x, arr[0, :], marker='o', label='arr')
ax.plot(x, arr_x[0, :], marker='o', label='arr_x')
ax.plot(x, arr_x2[0, :], marker='x', label='arr_x2', ls='--')
ax.plot(x, arr_xx[0, :], marker='o', label='arr_xx')
ax.plot(x, arr_xx2[0, :], marker='x', label='arr_xx2', ls='--')
ax.legend()
ax = axs[1]
ax.set_title('arr')
im = ax.imshow(arr, cmap='bwr')
cbar = ax.figure.colorbar(im, ax=ax, pad=.05)
ax = axs[2]
ax.set_title('arr_x')
im = ax.imshow(arr_x, cmap='bwr')
cbar = ax.figure.colorbar(im, ax=ax, pad=.05)
ax = axs[3]
ax.set_title('arr_xx')
im = ax.imshow(arr_xx, cmap='bwr')
cbar = ax.figure.colorbar(im, ax=ax, pad=.05)
test_gradient_n()
This is an excerpt from the original documentation (at the time of writing found at http://docs.scipy.org/doc/numpy/reference/generated/numpy.gradient.html). It states that unless the sampling distance is 1 you need to include a list containing the distances as an argument.
numpy.gradient(f, *varargs, **kwargs)
Return the gradient of an N-dimensional array.
The gradient is computed using second order accurate central differences in the interior and either first differences or second order accurate one-sides (forward or backwards) differences at the boundaries. The returned gradient hence has the same shape as the input array.
Parameters:
f : array_like
An N-dimensional array containing samples of a scalar function.
varargs : list of scalar, optional
N scalars specifying the sample distances for each dimension, i.e. dx, dy, dz, ... Default distance: 1.
edge_order : {1, 2}, optional
Gradient is calculated using Nth order accurate differences at the boundaries. Default: 1.
New in version 1.9.1.
Returns:
gradient : ndarray
N arrays of the same shape as f giving the derivative of f with respect to each dimension.
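So for a non-unit sample spacing the call simply takes the spacing as an argument; a small sketch (the spacing value is illustrative):
import numpy as np
h = 0.01
x = np.arange(0, 10, h)
a = np.sin(x)
da = np.gradient(a, h)    # first derivative with sample spacing h
dda = np.gradient(da, h)  # second derivative by nesting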
My solution is to create a function similar to np.gradient that calculates the 2nd derivatives numerically from the array data.
import numpy as np
def gradient2_even(y, h=None, edge_order=1):
"""
Return the 2nd-order gradient i.e.
2nd derivatives of y with n samples and k components.
The 2nd-order gradient is computed using second-order-accurate central differences
in the interior points and either first or second order accurate one-sided
(forward or backwards) differences at the boundaries.
The returned gradient hence has the same shape as the input array.
Parameters
----------
y : 1d or 2d array_like
The array containing the samples. If 2d with shape (n,k),
n is the number of samples at least 2 while k is the number of
y series/components. 1d input is equivalent to 2d input with shape (n,1).
h : constant or 1d, optional
spacing between the y samples. Default unitary spacing for
all y components. Spacing can be specified using:
1. Single scalar spacing value for all y components.
2. 1d array_like of length k specifying the spacing for each y component
edge_order : {1, 2}, optional
Order 1 means 3-point forward/backward finite differences
are used to calculate the 2nd derivatves at the edge points while
order 2 uses 4-point forward/backward finite differences.
Returns
----------
d2y : 1d or 2d array
Array containing the 2nd derivatives. The output shape is the same as y.
"""
if edge_order!=1 and edge_order!=2:
raise ValueError('edge_order must be 1 or 2.')
else:
pass
y = np.asfarray(y)
origshape = y.shape
if y.ndim!=1 and y.ndim!=2:
raise ValueError('y can only be 1d or 2d.')
elif y.ndim==1:
y = np.atleast_2d(y).T
elif y.ndim==2:
if y.shape[0]<2:
raise ValueError('The number of y samples must be atleast 2.')
else:
pass
else:
pass
n,k = y.shape
if h is None:
h = 1.0
else:
h = np.asfarray(h)
if h.ndim!=0 and h.ndim!=1:
raise ValueError('h can only be 0d or 1d.')
elif h.ndim==0:
pass
elif h.ndim==1 and h.size!=n:
raise ValueError('If h is 1d, it must have the same number as the components of y.')
else:
pass
d2y = np.zeros_like(y)
if n==2:
pass
elif n==3:
d2y[:] = ( 1/h**2 * (y[0] - 2*y[1] + y[2]) )
else:
d2y = np.zeros_like(y)
d2y[1:-1]=1/h**2 * ( y[:-2] - 2*y[1:-1] + y[2:] )
if edge_order==1:
d2y[0]=1/h**2 * ( y[0] - 2*y[1] + y[2] )
d2y[-1]=1/h**2 * ( y[-1] - 2*y[-2] + y[-3] )
else:
d2y[0]=1/h**2 * ( 2*y[0] - 5*y[1] + 4*y[2] - y[3] )
d2y[-1]=1/h**2 * ( 2*y[-1] - 5*y[-2] + 4*y[-3] - y[-4] )
return d2y.reshape(origshape)
Using your example,
# After importing the function from the script file or running it
from numpy import *
from matplotlib.pyplot import *
x, h = linspace(0, 10, 17, retstep=True) # use a fairly coarse grid to see the discrepancies better
y = sin(x)
ypp = -sin(x) # analytical 2nd derivatives
# Compute numerically the 2nd derivatives using 2nd-order finite differences at the edge points
d2y = gradient2_even(y, h, 2)
# Compute numerically the 2nd derivatives using nested gradient function
d2y2 = gradient(gradient(y, h, edge_order=2), h, edge_order=2)
# Compute numerically the 2nd derivatives using 1st-order finite differences at the edge points
d2y3 = gradient2_even(y, h, 1)
fig,ax=subplots(1,1)
ax.plot(x, ypp, x, d2y, 'o', x, d2y2, 'o', x, d2y3, 'o'), ax.grid()
ax.legend(['Analytical', 'edge_order=2', 'nested gradient', 'edge_order=1'])
fig.tight_layout()

Digitizing an analog signal

I have an array of CSV values representing a digital output. It has been gathered using an analog oscilloscope, so it is not a perfect digital signal. I'm trying to filter the data to get a perfect digital signal for calculating the periods (which may vary).
I would also like to define the maximum error I get from this filtering.
Something like this:
Idea
Apply a threshold to the data. Here is pseudocode:
for data_point_raw in data_array:
    if data_point_raw < 0.8:
        data_point_perfect = LOW
    elif data_point_raw > 2:
        data_point_perfect = HIGH
    else:
        # area between thresholds
        if previous_data_point_perfect == LOW:  data_point_perfect = LOW
        if previous_data_point_perfect == HIGH: data_point_perfect = HIGH
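For reference, a runnable sketch of that pseudocode might look like this (LOW/HIGH encoded as 0/1, thresholds 0.8 and 2 taken from above):
import numpy as np
def digitize_with_hysteresis(raw, low=0.8, high=2.0):
    out = np.empty(len(raw), dtype=int)
    state = 0                       # assume the signal starts LOW
    for i, value in enumerate(raw):
        if value < low:
            state = 0
        elif value > high:
            state = 1
        # between the thresholds the previous state is kept
        out[i] = state
    return out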
There are two problems bothering me.
This seems like a common problem in digital signal processing; however, I haven't found a predefined standard function for it. Is this an OK way to perform the filtering?
How would I get the maximum error?
Here's a bit of code that might help.
from __future__ import division
import numpy as np
def find_transition_times(t, y, threshold):
"""
Given the input signal `y` with samples at times `t`,
find the times where `y` increases through the value `threshold`.
`t` and `y` must be 1-D numpy arrays.
Linear interpolation is used to estimate the time `t` between
samples at which the transitions occur.
"""
# Find where y crosses the threshold (increasing).
lower = y < threshold
higher = y >= threshold
transition_indices = np.where(lower[:-1] & higher[1:])[0]
# Linearly interpolate the time values where the transition occurs.
t0 = t[transition_indices]
t1 = t[transition_indices + 1]
y0 = y[transition_indices]
y1 = y[transition_indices + 1]
slope = (y1 - y0) / (t1 - t0)
transition_times = t0 + (threshold - y0) / slope
return transition_times
def periods(t, y, threshold):
"""
Given the input signal `y` with samples at times `t`,
find the time periods between the times at which the
signal `y` increases through the value `threshold`.
`t` and `y` must be 1-D numpy arrays.
"""
transition_times = find_transition_times(t, y, threshold)
deltas = np.diff(transition_times)
return deltas
if __name__ == "__main__":
import matplotlib.pyplot as plt
# Time samples
t = np.linspace(0, 50, 501)
# Use a noisy time to generate a noisy y.
tn = t + 0.05 * np.random.rand(t.size)
y = 0.6 * ( 1 + np.sin(tn) + (1./3) * np.sin(3*tn) + (1./5) * np.sin(5*tn) +
(1./7) * np.sin(7*tn) + (1./9) * np.sin(9*tn))
threshold = 0.5
deltas = periods(t, y, threshold)
print("Measured periods at threshold %g:" % threshold)
print(deltas)
print("Min: %.5g" % deltas.min())
print("Max: %.5g" % deltas.max())
print("Mean: %.5g" % deltas.mean())
print("Std dev: %.5g" % deltas.std())
trans_times = find_transition_times(t, y, threshold)
plt.plot(t, y)
plt.plot(trans_times, threshold * np.ones_like(trans_times), 'ro-')
plt.show()
The output:
Measured periods at threshold 0.5:
[ 6.29283207 6.29118893 6.27425846 6.29580066 6.28310224 6.30335003]
Min: 6.2743
Max: 6.3034
Mean: 6.2901
Std dev: 0.0092793
You could use numpy.histogram and/or matplotlib.pyplot.hist to further analyze the array returned by periods(t, y, threshold).
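For example, reusing the arrays from the script above:
deltas = periods(t, y, threshold)
counts, edges = np.histogram(deltas, bins=20)  # numeric summary
plt.hist(deltas, bins=20)                      # or plot the distribution
plt.xlabel('period')
plt.show()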
This is not an answer to your question, just a suggestion that may help. I'm writing it here because I can't put an image in a comment.
I think you should normalize data somehow, before any processing.
After normalization to range of 0...1 you should apply your filter.
If you're really only interested in the period, you could plot the Fourier transform; you'll have a peak where the frequency of the signal occurs (and so you have the period). The wider the peak in the Fourier domain, the larger the error in your period measurement.
import numpy as np
data = np.asarray(my_data)
np.fft.fft(data)
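A small sketch of how the dominant period could then be read off, assuming a uniform sample spacing dt (the value below is a placeholder, not taken from your data):
import numpy as np
dt = 0.1                                   # hypothetical sample spacing
data = np.asarray(my_data)
data = data - data.mean()                  # remove the DC component first
spectrum = np.abs(np.fft.rfft(data))
freqs = np.fft.rfftfreq(data.size, d=dt)
peak_freq = freqs[np.argmax(spectrum)]     # dominant frequency
period = 1.0 / peak_freq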
Your filtering is fine, it's basically the same as a Schmitt trigger, but the main problem you might have with it is speed. The benefit of using Numpy is that it can be as fast as C, whereas with this approach you have to iterate over each element once in Python.
You can achieve something similar using the median filter from SciPy. The following should achieve a similar result (and not be dependent on any magnitudes):
filtered = scipy.signal.medfilt(raw)
filtered = numpy.where(filtered > numpy.mean(filtered), 1, 0)
You can tune the strength of the median filtering with medfilt(raw, n_samples), n_samples defaults to 3.
As for the error, that's going to be very subjective. One way would be to discretise the signal without filtering and then compare for differences. For example:
discrete = numpy.where(raw > numpy.mean(raw), 1, 0)
errors = np.count_nonzero(filtered != discrete)
error_rate = errors / len(discrete)
