I've been trying to plot the Bates distribution curve. The Bates distribution is the distribution of the mean of n independent standard uniform variates (on [0, 1]).
(I worked on the interval [-1, 1] via a simple change of variable.)
The curve becomes unstable beyond a certain value of n, which prevents me from moving forward.
In order to treat the variable x as continuous, I sampled the interval with 10**6 points. Here are some examples for different n:
But for n greater than 29 the curve diverges, and the larger n is, the closer the deformation caused by the divergence gets to the (mean) center of the curve:
The probability density of the Bates distribution is defined as follows:
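The density implemented in the code below corresponds to (for x in [0, 1]):
f_X(x; n) = \frac{n^n}{(n-1)!} \sum_{k=0}^{\lfloor nx \rfloor} (-1)^k \binom{n}{k} \left(x - \frac{k}{n}\right)^{n-1}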
My code:
import numpy as np
from math import factorial as fac

samples = 10**6

def combinaison(n, k):  # combination of k out of n
    cnk = fac(n) / (fac(k) * fac(abs(n - k)))  # fac is the factorial
    return cnk

def dens_probas(a, b, n):
    x = np.linspace(a, b, num=samples)
    y = (x - a) / (b - a)
    F = list()
    for i in range(0, len(y)):
        g = 0
        for k in range(0, int(n * y[i] + 1)):
            g = g + pow(-1, k) * combinaison(n, k) * pow(y[i] - k / n, n - 1)
        d = (n**n / fac(n - 1)) * g
        F.append(d)
    return F
Any idea how to correct the divergence for larger n?
The main problem is that the formula with alternating sums is extremely prone to numerical accuracy issues.
One trick to avoid the problems on the right side is to assume the distribution is symmetric and only calculate the left half of it.
A straightforward accuracy optimization is to replace the factorials in the formula for combinaison by a call to scipy.special.comb. This avoids dividing very large numbers by each other.
A smaller accuracy optimization is to calculate g for even and odd k together. At first sight the formula cannot be reduced much, so this replaces:
for k in range(0, int(floor(n * y[i] + 1))):
    g += pow(-1, k) * combinaison(n, k) * pow(y[i] - k / n, n - 1)
By:
last_k = int(floor(n * y[i]))
for k in range(0, last_k + 1, 2):  # note that k increments in steps of 2
    if k == last_k:
        g += combinaison(n, k) * pow(y[i] - k / n, n - 1)
    else:
        g += combinaison(n, k) * (pow(y[i] - k / n, n - 1) - pow(y[i] - (k + 1) / n, n - 1) * (n - k) / (k + 1))
Some other remarks:
The variable samples is only used to set the number of subdivisions on the x axis. A much smaller number will suffice. (In the code below I renamed the variable to xaxis_steps.)
Using append for F will be extremely slow. It is better to create a numpy array of the correct size and then fill it in. (This also makes copying the halves easier.)
from matplotlib import pyplot as plt
import numpy as np
from scipy.special import comb
from math import factorial as fac
from math import floor

xaxis_steps = 500

def combinaison(n, k):  # combination of k out of n
    return comb(n, k)

def dens_probas(a, b, n):
    x = np.linspace(a, b, num=xaxis_steps)
    y = (x - a) / (b - a)
    F = np.zeros_like(y)
    for i in range(0, (len(y) + 1) // 2):
        g = 0
        for k in range(0, int(floor(n * y[i] + 1))):
            g += pow(-1, k) * combinaison(n, k) * pow(y[i] - k / n, n - 1)
        F[i] = (n ** n / fac(n - 1)) * g
        F[-i - 1] = F[i]  # symmetric graph
    plt.plot(x, F, label=f'n={n}')
    return F

for n in (5, 30, 50, 80, 90):
    dens_probas(-1, 1, n)
plt.legend()
plt.show()
All these optimizations together move the accuracy problem from n=30 to around n=80:
A completely different approach would be to generate a lot of uniform samples and take their means. From those means a kde plot can be generated. The smoothness of such curves depends on the number of samples. A kde can be plotted directly via seaborn's kdeplot. You can also calculate the kde function separately, apply it to a given x range and plot it via standard matplotlib.
import numpy as np
from matplotlib import pyplot as plt
from scipy.stats import gaussian_kde

num_samples = 10 ** 5

def dens_probas(a, b, n):
    samples = np.random.uniform(a, b, size=(num_samples, n)).mean(axis=1)
    samples = np.hstack([samples, a + b - samples])  # force symmetry; this is not strictly necessary
    return gaussian_kde(samples)

for n in (5, 30, 50, 80, 90, 200):
    kde = dens_probas(-1, 1, n)
    xs = np.linspace(-1, 1, 1000)
    F = kde(xs)
    plt.plot(xs, F, label=f'n={n}')
plt.legend()
plt.show()
I do have a function, for example the power function f(x) = k*x^(1/a) used in the code below, but this can be something else as well, like a quadratic or logarithmic function. I am only interested in the domain x >= 1 (1 to 50000 in the example below). The parameters of the function (a and k in this case) are known as well.
My goal is to fit a continuous piece-wise function to this, which contains alternating segments of linear functions (i.e. sloped straight segments, each with intercept of 0) and constants (i.e. horizontal segments joining the sloped segments together). The first and last segments are both sloped. And the number of segments should be pre-selected between around 9-29 (that is 5-15 linear steps + 4-14 constant plateaus).
Formally
The input function:
The fitted piecewise function:
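A reconstruction of the two definitions, based on the code and the description above (the exact notation may differ from the original):
Input function: f(x) = k * x^(1/a)
Fitted piecewise function, with m linear pieces b_1..b_m, plateaus c_1..c_{m-1}, and breakpoints r_1..r_{2m-2}:
X(x) = b_1*x on [1, r_1]; c_1 on (r_1, r_2]; b_2*x on (r_2, r_3]; c_2 on (r_3, r_4]; ...; b_m*x on (r_{2m-2}, 50000]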
I am looking for the optimal resulting parameters (c,r,b) (in terms of least squares) if the segment numbers (n) are specified beforehand.
The resulting constants (c) and the breakpoints (r) should be whole natural numbers, and the slopes (b) round two decimal point values.
I have tried to do the fitting numerically using the pwlf package with segmented constant models, and further processed the resulting constant model with some graphical intuition to "slice" the constant steps with the slopes. It works to some extent, but I am sure this is suboptimal from both a fitting and a computational-efficiency perspective. It takes multiple minutes to generate a fit with 8 slopes on the range 1-50000. I am sure there must be a better way to do this.
My idea is that, instead of using only numerical methods/ML, the fact that we have the algebraic form of the input function could be exploited in some way, at least through algebraic transforms (integrals), to get to a simpler optimization problem.
import numpy as np
import matplotlib.pyplot as plt
import pwlf

# The input function
def input_func(x, k, a):
    return np.power(x, 1/a) * k

x = np.arange(1, 5e4)
y = input_func(x, 1.8, 1.3)
plt.plot(x, y);

def pw_fit(func, x_r, no_seg, *fparams):
    # working on the specified range
    x = np.arange(1, x_r)
    y_input = func(x, *fparams)

    my_pwlf = pwlf.PiecewiseLinFit(x, y_input, degree=0)
    res = my_pwlf.fit(no_seg)
    yHat = my_pwlf.predict(x)

    # Function values at the breakpoints
    y_isec = func(res, *fparams)

    # Slope values at the breakpoints
    slopes = np.round(y_isec / res, decimals=2)
    slopes = slopes[1:]

    # For the first slope value, I use the intersection of the first constant plateau and the input function
    slopes = np.insert(slopes, 0, np.round(y_input[np.argwhere(np.diff(np.sign(y_input - yHat))).flatten()[0]] / np.argwhere(np.diff(np.sign(y_input - yHat))).flatten()[0], decimals=2))

    plateaus = np.unique(np.round(yHat))

    # If due to rounding slope values (to two decimals), there is no change in a subsequent step, I just remove those segments
    to_del = np.argwhere(np.diff(slopes) == 0).flatten()
    slopes = np.delete(slopes, to_del + 1)
    plateaus = np.delete(plateaus, to_del)

    breakpoints = [np.ceil(plateaus[0] / slopes[0])]
    for idx, j in enumerate(slopes[1:-1]):
        breakpoints.append(np.floor(plateaus[idx] / j))
        breakpoints.append(np.ceil(plateaus[idx + 1] / j))
    breakpoints.append(np.floor(plateaus[-1] / slopes[-1]))

    return slopes, plateaus, breakpoints

slo, plat, breaks = pw_fit(input_func, 50000, 8, 1.8, 1.3)

# The piecewise function itself
def pw_calc(x, slopes, plateaus, breaks):
    x = x.astype('float')
    cond_list = [x < breaks[0]]
    for idx, j in enumerate(breaks[:-1]):
        cond_list.append((j <= x) & (x < breaks[idx + 1]))
    cond_list.append(breaks[-1] <= x)

    func_list = [lambda x: x * slopes[0]]
    for idx, j in enumerate(slopes[1:]):
        func_list.append(plateaus[idx])
        func_list.append(lambda x, j=j: x * j)

    return np.piecewise(x, cond_list, func_list)

y_output = pw_calc(x, slo, plat, breaks)
plt.plot(x, y, y_output);
(Not important, but I think the fitted piecewise function is not continuous as it is. Intervals should be x<=r1; r1<x<=r2; ....)
As Anatolyg has pointed out, it looks to me that in the optimal solution (for the function posted at least, and probably for any function whose derivative is nonzero everywhere), the horizontal segments will collapse to a point or to the minimum segment length (in this case 1).
EDIT---------------------------------------------
The behavior above would only be valid if the sloped segments could have an intercept. If the intercepts are zero, as posted in the question, one consideration must be taken into account: is the initial parabolic function defined at zero or nearby? Imagine the function y = 0.001*sqrt(x - 1000); then the segments defined as b*x will have a slope close to zero and will be so similar to the constant segments that the best fit will simply be the single zero-intercept line that best fits the whole function.
Provided that the function is defined at zero or nearby, you can start by approximating the curve with linear segments only (with intercepts):
1) Divide the function domain into N intervals (either equal intervals, or intervals whose size is a function of the average curvature (or second derivative) of the function along the domain).
2) Do a linear fit/regression in each interval.
3) For each interval, if a point (or a bunch of points) at the extreme of the interval is better fitted by the line of the neighboring interval than by the line of its own interval, assign that point to the neighboring interval.
4) Repeat from 2) until no extreme points are moved (a rough sketch of this loop is given below).
The linear regressions might be optimized so that the covariance matrices are not recalculated from scratch on each iteration; instead, just add the contributions of the moved points to the previous covariance matrices.
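Here is a rough sketch in Python of steps 1)-4); the function name and the specific choices (np.polyfit per interval, moving one boundary point at a time) are only illustrative:
import numpy as np

def refine_linear_segments(f, lo, hi, N, max_iter=200):
    """Sketch: equal intervals, per-interval line fits, then move boundary
    points to whichever neighbouring line fits them better."""
    x = np.arange(lo, hi, dtype=float)
    y = f(x)
    bounds = np.linspace(0, len(x), N + 1).astype(int)   # step 1): equal intervals

    def fit(i):                                          # step 2): slope, intercept of interval i
        sl = slice(bounds[i], bounds[i + 1])
        return np.polyfit(x[sl], y[sl], 1)

    for _ in range(max_iter):
        lines = [fit(i) for i in range(N)]
        moved = False
        for i in range(N - 1):                           # step 3): reassign boundary points
            b = bounds[i + 1]
            err = lambda line, j: abs(np.polyval(line, x[j]) - y[j])
            # last point of interval i better explained by the next line?
            if err(lines[i + 1], b - 1) < err(lines[i], b - 1) and b - bounds[i] > 2:
                bounds[i + 1] -= 1
                moved = True
            # first point of interval i+1 better explained by the previous line?
            elif err(lines[i], b) < err(lines[i + 1], b) and bounds[i + 2] - b > 2:
                bounds[i + 1] += 1
                moved = True
        if not moved:                                    # step 4): stop when nothing moves
            break
    return bounds, [fit(i) for i in range(N)]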
Then each linear segment (LSi) is replaced by a combination of a small constant segment at the beginning (Cbi), a linear segment without intercept (Si), and another constant segment at the end (Cei). These segments are easy to calculate: Si will pass through the middle point of LSi, and Cbi and Cei will take the begin and end values of LSi respectively. The interval of each segment then has to be calculated as an intersection between lines.
With this, the constant end segment will be collinear with the constant begin segment of the next interval, so they will merge, resulting in a series of constant and linear segments interleaved.
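As a minimal illustration of that replacement for a single fitted line y = m*x + q on [x0, x1] (the helper name and formulas are just one reading of the description above):
def split_linear_segment(m, q, x0, x1):
    """Replace a fitted line y = m*x + q on [x0, x1] by
    begin-constant / zero-intercept-linear / end-constant pieces."""
    xm = 0.5 * (x0 + x1)
    b = (m * xm + q) / xm      # zero-intercept slope through the midpoint of the line
    cb = m * x0 + q            # begin constant: value of the line at x0
    ce = m * x1 + q            # end constant: value of the line at x1
    rb = cb / b                # breakpoint where b*x intersects the begin constant
    re = ce / b                # breakpoint where b*x intersects the end constant
    return cb, (rb, b, re), ce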
But this would only be a floating-point starting solution. Next, you will have to apply all the roundings, which will mess up the segments quite a lot, as the constraints of integer intervals and zero-intercept linear segments can conflict strongly. In fact, b, c, r are not totally independent: if ci and ri+1 are known, then bi+1 is already fixed.
If nothing is broken so far, the final task will be to minimize the error/cost function (I assume it will be the integral of the error between the parabolic function and the segments). My guess is that gradients here will be quite a pain: if you change, for example, one ci, all the remaining bj and cj will have to adapt as well due to the integer-interval restriction. However, if you can generalize the derivatives between parameters (how much do I have to adapt bi+1 if ci changes by one unit), you can propagate the change of one parameter to all the other parameters and obtain a kind of gradient. Then, for each interval, you can estimate what the ideal parameter would be and, averaging over all intervals, calculate the best gradient step. Let me illustrate this:
Assume first that the r parameters are fixed. If I change c1 by one unit, b2 changes by 0.1, c2 changes by -0.2 and b3 changes by 0.2. This would be the gradient.
Then I estimate, comparing with the parabolic curve, that c1 should increase 0.5 (to reduce the cost by 10 points), b2 should increase 0.2 (to reduce the cost by 5 points), c2 should increase 0.2 (to reduce the cost by 6 points) and b3 should increase 0.1 (to reduce the cost by 9 points).
Finally, the gradient step would be (0.5/1·10 + 0.2/0.1·5 - 0.2/(-0.2)·6 + 0.1/0.2·9)/(10 + 5 + 6 + 9)~= 0.45. Thus, c1 would increase 0.45 units, b2 would increase 0.45·0.1, and so on.
When you add the r parameters to the pot, calculation is not straightforward, since integer intervals do not have a proper derivative. However, you can treat the r parameters as floating point, calculate and apply the gradient step, and then apply the roundings.
We can integrate the squared error function for linear and constant pieces and let SciPy optimize it. Python 3:
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize

xl = 1
xh = 50000
a = 1.3
p = 1 / a
n = 8

def split_b_and_c(bc):
    return bc[::2], bc[1::2]

def solve_for_r(b, c):
    r = np.empty(2 * n)
    r[0] = xl
    r[1:-1:2] = c / b[:-1]
    r[2::2] = c / b[1:]
    r[-1] = xh
    return r

def linear_residual_integral(b, x):
    return (
        (x ** (2 * p + 1)) / (2 * p + 1)
        - 2 * b * x ** (p + 2) / (p + 2)
        + b ** 2 * x ** 3 / 3
    )

def constant_residual_integral(c, x):
    return x ** (2 * p + 1) / (2 * p + 1) - 2 * c * x ** (p + 1) / (p + 1) + c ** 2 * x

def squared_error(bc):
    b, c = split_b_and_c(bc)
    r = solve_for_r(b, c)
    linear = np.sum(
        linear_residual_integral(b, r[1::2]) - linear_residual_integral(b, r[::2])
    )
    constant = np.sum(
        constant_residual_integral(c, r[2::2])
        - constant_residual_integral(c, r[1:-1:2])
    )
    return linear + constant

def evaluate(x, b, c, r):
    i = 0
    while x > r[i + 1]:
        i += 1
    return b[i // 2] * x if i % 2 == 0 else c[i // 2]

def main():
    bc0 = (xl + (xh - xl) * np.arange(1, 4 * n - 2, 2) / (4 * n - 2)) ** (
        p - 1 + np.arange(2 * n - 1) % 2
    )
    bc = scipy.optimize.minimize(
        squared_error, bc0, bounds=[(1e-06, None) for i in range(2 * n - 1)]
    ).x
    b, c = split_b_and_c(bc)
    r = solve_for_r(b, c)
    X = np.linspace(xl, xh, 1000)
    Y = [evaluate(x, b, c, r) for x in X]
    plt.plot(X, X ** p)
    plt.plot(X, Y)
    plt.show()

if __name__ == "__main__":
    main()
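For reference, the two residual integrals used above follow from expanding the squared error against the target x^p (with p = 1/a); written out as a sketch:
\int (x^p - b x)^2 dx = \frac{x^{2p+1}}{2p+1} - \frac{2 b x^{p+2}}{p+2} + \frac{b^2 x^3}{3} + C
\int (x^p - c)^2 dx = \frac{x^{2p+1}}{2p+1} - \frac{2 c x^{p+1}}{p+1} + c^2 x + C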
I have tried to come up with a new solution myself, based on the idea of #Amo Robb, where I have partitioned the domain and curve-fitted a dual piece, constant plus linear, together (with the help of np.maximum). I have used 1/f'(x) as the function to designate the breakpoints, but I know this is arbitrary and does not provide a global optimum. Maybe there is some optimal function for these breakpoints. But this solution is OK for me, as it might be appropriate to have a better fit at the first segments, at the expense of the error for the later segments. (The task itself is actually a cost-based retail margin calculation {supply price -> added margin}, as the retail POS software can only work with such a piecewise margin function.)
The answer from #David Eisenstat is the correct optimal solution if the parameters are allowed to be floats. Unfortunately the POS software cannot use floats. It is OK to round up the c-s and r-s afterwards, but the b-s should be rounded to two decimals, as those are inputted as percentages, and this constraint would ruin the optimal solution with long floats. I will try to further improve my solution with both Amo's and David's valuable input. Thank you for that!
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# The input function f(x)
def input_func(x, k, a):
    return np.power(x, 1/a) * k

# 1 / f'(x)
def one_per_der(x, k, a):
    return a / (k * np.power(x, 1/a - 1))

# 1 / f'(x) inverted
def one_per_der_inv(x, k, a):
    return np.power(a / (x * k), a / (1 - a))

def segment_fit(start, end, y, first_val):
    b, _ = curve_fit(lambda x, b: np.maximum(first_val, b * x), np.arange(start, end), y[start-1:end-1])
    b = float(np.round(b, decimals=2))
    bp = np.round(first_val / b)
    last_val = np.round(b * end)
    return b, bp, last_val

def pw_fit(end_range, no_seg, **fparams):
    y_bps = np.linspace(one_per_der(1, **fparams), one_per_der(end_range, **fparams), no_seg + 1)[1:]
    x_bps = np.round(one_per_der_inv(y_bps, **fparams))
    y = input_func(x, **fparams)

    slopes = [np.round(float(curve_fit(lambda x, b: x * b, np.arange(1, x_bps[0]), y[:int(x_bps[0]) - 1])[0]), decimals=2)]
    plats = [np.round(x_bps[0] * slopes[0])]
    bps = []
    for i, xbp in enumerate(x_bps[1:]):
        b, bp, last_val = segment_fit(int(x_bps[i] + 1), int(xbp), y, plats[i])
        slopes.append(b); bps.append(bp); plats.append(last_val)
    breaks = sorted(list(x_bps) + bps)[:-1]

    # If due to rounding slope values (to two decimals), there is no change in a subsequent step, I just remove those segments
    to_del = np.argwhere(np.diff(slopes) == 0).flatten()
    breaks_to_del = np.concatenate((to_del * 2, to_del * 2 + 1))
    slopes = np.delete(slopes, to_del + 1)
    plats = np.delete(plats[:-1], to_del)
    breaks = np.delete(breaks, breaks_to_del)

    return slopes, plats, breaks

def pw_calc(x, slopes, plateaus, breaks):
    x = x.astype('float')
    cond_list = [x < breaks[0]]
    for idx, j in enumerate(breaks[:-1]):
        cond_list.append((j <= x) & (x < breaks[idx + 1]))
    cond_list.append(breaks[-1] <= x)

    func_list = [lambda x: x * slopes[0]]
    for idx, j in enumerate(slopes[1:]):
        func_list.append(plateaus[idx])
        func_list.append(lambda x, j=j: x * j)

    return np.piecewise(x, cond_list, func_list)

fparams = {'k': 1.8, 'a': 1.2}
end_range = 5e4
no_steps = 10

x = np.arange(1, end_range)
y = input_func(x, **fparams)

slopes, plats, breaks = pw_fit(end_range, no_steps, **fparams)
y_output = pw_calc(x, slopes, plats, breaks)
plt.plot(x, y_output, y);
I wanted to calculate the normalized cross-correlation function of two signals, where the x axis is the time delay and the y axis is the correlation value between -1 and 1, so I decided to use SciPy.
I use the command corr = signal.correlate(s1['Strain'], s2['Strain'], mode='full')
where s1['Strain'] and s2['Strain'] are the pandas DataFrame values, but it doesn't return a normalized function with the x axis as time delay.
Here is example data
s1:
Strain
0 -1.587702e-22
1 -1.425868e-22
2 -1.174897e-22
3 -8.559119e-23
4 -4.949480e-23
...
s2 looks similar. I know the sampling rate of both datasets; it's 4096 kHz.
Thanks for your help.
First of all, to get a normalized coefficient (such that at lag 0 we get the Pearson correlation):
divide both signals by their standard deviation
scale by the length of the signal over which the convolution is done (the shortest signal):
out = correlate(x/np.std(x), y/np.std(y), 'full') / min(len(x), len(y))
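A quick sanity check of this normalisation (a sketch; the signals are centred first, since the formula above does not subtract the means):
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y = 0.5 * x + rng.standard_normal(1000)
x -= x.mean(); y -= y.mean()   # make the signals zero-mean so lag 0 matches Pearson

out = correlate(x / np.std(x), y / np.std(y), 'full') / min(len(x), len(y))
zero_lag = out[len(x) - 1]     # index of lag 0 for two equal-length inputs
print(zero_lag, np.corrcoef(x, y)[0, 1])  # should agree closely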
Now for the lags: from the official documentation of correlate one can read that the full output of the cross-correlation is given by:
z[k] = (x \star y)(k - N + 1) = \sum_{l=0}^{\|x\|-1} x_l \, y^{*}_{l-k+N-1}
where \star denotes the cross-correlation above, and k goes from 0 up to ||x|| + ||y|| - 2 precisely. N is max(len(x), len(y)).
The lags are the argument of the cross-correlation (x \star y), so they range from 0 - N + 1 to ||x|| + ||y|| - 2 - N + 1, which is n - 1 with n = min(len(x), len(y)).
Also, by briefly looking at the source code, I think they swap x and y sometimes if convenient (hence the min(len(x), len(y)) in the normalisation above). However, this implies changing the start of our lags, therefore:
N = max(len(x), len(y))
n = min(len(x), len(y))

# if len(x) < len(y):
lags = np.arange(-N + 1, n)
# else:
lags = np.arange(-n + 1, N)
Summary
Try this code on two time series whose cross-correlation you want to plot:
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import correlate
def plot_xcorr(x, y):
    "Plot cross-correlation (full) between two signals."
    N = max(len(x), len(y))
    n = min(len(x), len(y))

    if N == len(y):
        lags = np.arange(-N + 1, n)
    else:
        lags = np.arange(-n + 1, N)
    c = correlate(x / np.std(x), y / np.std(y), 'full')

    plt.plot(lags, c / n)
    plt.show()
To calculate the time delay between two signals, we need to find the cross-correlation between two signals and find the argmax.
Assuming data_1 and data_2 are samples of two signals:
import numpy as np
import pandas as pd
correlation = np.correlate(data_1, data_2, mode='same')
delay = np.argmax(correlation) - int(len(correlation)/2)
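To put the x axis in units of time rather than samples, divide the delay by the sampling rate; a minimal sketch, assuming the 4096 kHz rate mentioned in the question:
fs = 4096e3  # sampling rate from the question (4096 kHz); adjust if yours differs
time_delay = delay / fs  # delay in seconds; the sign depends on the argument order of np.correlate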
I'm trying to fit a power-law to data which is on a double log scale. Therefore I've used the curve_fit(...) function from the scipy.optimize package.
To run the function I've implemented the following piece of code: COR_coef[i] = curve_fit(lambda x, m: c * x ** m, x, COR_IFG[:, i])[0][0]. To the best of my knowledge, curve_fit(...) should now correctly fit a power-law (a straight line in double log scale) to my data. However, for some reason I just do not seem to get the fit right. See the attached picture for the data and its fit.
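For reference, a common alternative is to estimate the exponent with a straight-line fit in log-log space, since least squares in linear space weights the largest values far more heavily than the small ones. A minimal sketch with a hypothetical helper:
import numpy as np

# Hypothetical helper: estimate the power-law exponent m of y ~ c * x**m
# by a straight-line fit in log-log space.
def loglog_slope(x, y):
    mask = (x > 0) & (y > 0)             # logs are only defined for positive values
    m, log_c = np.polyfit(np.log(x[mask]), np.log(y[mask]), 1)
    return m, np.exp(log_c)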
Some more context with regards to the minimum reproducible example (see below):
The code generates random noise for simulation purposes; this is done in the white_noise(...) function.
This random noise is then misaligned (in a for-loop with different fractions of misalignment according to the variable values_to_shift, so the development of the power-law can be studied) and subtracted from the original noise to obtain a residual signal.
The residual signal is the signal the power-law is fitted to.
The curve_fit(...) is applied in the sim_powerlaw_coefficient(...) function.
I am aware of the fact that my residual signal shows some artifacts when the misalignment gets larger; unfortunately I don't know how to get rid of these artifacts.
MINIMUM REPRODUCIBLE EXAMPLE
import matplotlib.pyplot as plt
import numpy as np
import numpy.fft as fft
import numpy.random as rnd
from scipy.optimize import curve_fit
plt.style.use('seaborn-darkgrid')
rnd.seed(100) # to select a random seed for creating the "random" noise
grad = -5 / 3. # slope to use for every function
c = 1 # base parameter for the powerlaw
ylim = [1e-7, 30] # range for the double log plots of the powerfrequency domains
values_to_shift = [0, 2**-11, 2**-10, 2**-9, 2**-8, 2**-7, 2**-6, 2**-5, 2**-4, 2**-3, 2**-2, 2**-1, 2**0] # fractions of missalignment
def white_noise(n: int, N: int):
    """
    - Creates a data set of white noise with size n, N;
    - Filters this dataset with the corresponding slope;
      This slope is usually equal to -5/3 or -2/3
    - Makes sure the slope is equal to the requested slope in the double log scale.
    #param n: size of random array
    #param N: number of random arrays
    #param slope: slope of the gradient
    #return: white_noise, filtered white_noise and the original signal
    """
    m = grad
    x = np.linspace(1, n, n // 2)
    slope_loglog = c * x ** m

    whitenoise = rnd.randn(n // 2, N) + 1j * rnd.randn(n // 2, N)
    whitenoise[0, :] = 0  # zero-mean noise
    whitenoise_filtered = whitenoise * slope_loglog[:, np.newaxis]

    whitenoise = 2 * np.pi * np.concatenate((whitenoise, whitenoise[0:1, :], np.conj(whitenoise[-1:0:-1, :])), axis=0)
    whitenoise_filtered = 2 * np.pi * np.concatenate(
        (whitenoise_filtered, whitenoise_filtered[0:1, :], np.conj(whitenoise_filtered[-1:0:-1, :])), axis=0)

    whitenoise_signal = fft.ifft(whitenoise_filtered, axis=0)
    whitenoise_signal = np.real_if_close(whitenoise_signal)
    if np.iscomplex(whitenoise_signal).any():
        print('Warning! whitenoise_signal is complex-valued!')
    whitenoise_retransformed = fft.fft(whitenoise_signal, axis=0)
    return whitenoise, whitenoise_filtered, whitenoise_signal, whitenoise_retransformed, slope_loglog
def sim_powerlaw_coefficient(n: int, N: int, show_powerlaw=0):
    """
    #param n: Number of values in the IFG
    #param N: Number of IFG's
    #return: Returns the coefficient after subtraction of two IFG's
    """
    master = white_noise(n, N)
    slave = white_noise(n, N)
    x = np.linspace(1, n, n // 2)

    signal_IFG = master[2] - slave[2]
    noise_IFG = np.abs(fft.fft(signal_IFG, axis=0))[0:n // 2, :]

    for k in range(len(values_to_shift)):
        shift = int(np.round(values_to_shift[k] * n, 0))  # int() instead of the deprecated np.int
        inp = signal_IFG.copy()

        # the weather model is a shifted copy of the actual signal, to better understand the errors that are introduced.
        weather_model = np.roll(inp, shift, axis=0)
        WM_IFG = np.abs(fft.fft(weather_model, axis=0)[0:n // 2, :])

        signal_corrected = signal_IFG - weather_model
        COR_IFG = np.abs(fft.fft(signal_corrected, axis=0)[0:n // 2, :])

        COR_coef = np.zeros(N)
        for i in range(N):
            COR_coef[i] = curve_fit(lambda x, m: c * x ** m, x, COR_IFG[:, i])[0][0]

        plt.figure(figsize=(15, 10))
        plt.title('Corrected IFG (combined - weather model)')
        plt.loglog(COR_IFG, label='Corrected IFG')
        plt.ylim(ylim)
        plt.xlabel('log(k)')
        plt.ylabel('log(P)')
        plt.loglog(c * x ** COR_coef.mean(), '-.', label=f'COR powerlaw coef:{COR_coef.mean()}')
        plt.legend(loc=0)
        plt.tight_layout()

sim_powerlaw_coefficient(8192, 1, show_powerlaw=1)
I need to plot the following function using Python, numpy and matplotlib:
for the values of N = 5, 20 and 60.
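Based on the code in the answers below, the function being summed appears to be:
\Psi_N(x) = \frac{2}{N+1} \sum_{n=1,3,5,\dots}^{N} (-1)^{(n-1)/2} \sin(n x)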
I've created a list of odd numbers using:
def odd(n):
    nums = []
    for i in range(1, 2*n, 2):
        nums.append(i)
    return nums
But I don't know how to use this in a sigma function because I need to vary my x values and sum over the function for the range of odd(n).
If you want to plot (i.e. visualise) the function for some N, then the procedure is as follows:
Generate an array of x values. In this case, ranging from -pi to pi makes most sense.
Write a loop that computes one sin() term at a time and accumulates the result in a separate array, which we call Psi.
Finally multiply the Psi by the constant 2/(N+1).
Plot the result
import numpy as np
import matplotlib.pyplot as plt

# x is 100 equally spaced points from -pi to pi, inclusive
x = np.linspace(-np.pi, np.pi, 100)
Psi = 0 * x  # now Psi is an array of zeros
N = 60

# second input of range is N+1 since our index n satisfies 1 <= n < N+1
# third input makes n increment by 2 each loop instead of the default 1
for n in range(1, N + 1, 2):
    # note the parentheses: (-1)**k alternates sign, while -1**k would always be -1
    Psi += (-1) ** ((n - 1) // 2) * np.sin(n * x)
Psi *= 2 / (N + 1)

plt.plot(x, Psi)
Code without pure Python loops:
def Psi(x, N=7):
    """Note: N should be odd """
    _s = np.arange(1, int((N + 1) / 2) + 1)
    return 2 * np.sum(np.where(_s % 2, 1, -1) * np.sin((2 * _s - 1) * x)) / (N + 1)
This code is without loops and should work for any value of x and N.
x must be an array or list with more than 1 element
import numpy as np
from numpy import matlib
import matplotlib.pyplot as plt

def psi(x, N):
    n = np.arange(0, N, 2) + 1
    sigma = matlib.repmat((-1)**((n-1)/2), len(x), 1).T * np.sin(matlib.repmat(n, len(x), 1).T * x)
    PSI = (2 / (N + 1)) * np.sum(sigma, axis=0)
    return PSI

x = np.linspace(0, 2*np.pi, 50)
N = 5
y = psi(x, N)
plt.plot(y)
Short summary: How do I quickly calculate the finite convolution of two arrays?
Problem description
I am trying to obtain the finite convolution of two functions f(x), g(x) defined by
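Presumably this is the causal convolution on [0, x], reconstructed from the analytical expressions used below:
(f * g)(x) = \int_0^x f(u) \, g(x - u) \, du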
To achieve this, I have taken discrete samples of the functions and turned them into arrays of length steps:
xarray = [x * i / steps for i in range(steps)]
farray = [f(x) for x in xarray]
garray = [g(x) for x in xarray]
I then tried to calculate the convolution using the scipy.signal.convolve function. This function gives the same results as the algorithm conv suggested here. However, the results differ considerably from analytical solutions. Modifying the algorithm conv to use the trapezoidal rule gives the desired results.
To illustrate this, I let
f(x) = exp(-x)
g(x) = 2 * exp(-2 * x)
the results are:
Here Riemann represents a simple Riemann sum, trapezoidal is a modified version of the Riemann algorithm to use the trapezoidal rule, scipy.signal.convolve is the scipy function and analytical is the analytical convolution.
Now let g(x) = x^2 * exp(-x) and the results become:
Here 'ratio' is the ratio of the values obtained from scipy to the analytical values. The above demonstrates that the problem cannot be solved by renormalising the integral.
The question
Is it possible to use the speed of scipy but retain the better results of a trapezoidal rule or do I have to write a C extension to achieve the desired results?
An example
Just copy and paste the code below to see the problem I am encountering. The two results can be brought into closer agreement by increasing the steps variable. I believe that the problem is due to artefacts from right-hand Riemann sums: the integral is overestimated while it is increasing and approaches the analytical solution again as it decreases.
EDIT: I have now included the original algorithm 2 as a comparison which gives the same results as the scipy.signal.convolve function.
import numpy as np
import scipy.signal as signal
import matplotlib.pyplot as plt
import math

def convolveoriginal(x, y):
    '''
    The original algorithm from http://www.physics.rutgers.edu/~masud/computing/WPark_recipes_in_python.html.
    '''
    P, Q, N = len(x), len(y), len(x) + len(y) - 1
    z = []
    for k in range(N):
        t, lower, upper = 0, max(0, k - (Q - 1)), min(P - 1, k)
        for i in range(lower, upper + 1):
            t = t + x[i] * y[k - i]
        z.append(t)
    return np.array(z)  # Modified to include conversion to numpy array

def convolve(y1, y2, dx=None):
    '''
    Compute the finite convolution of two signals of equal length.
    #param y1: First signal.
    #param y2: Second signal.
    #param dx: [optional] Integration step width.
    #note: Based on the algorithm at http://www.physics.rutgers.edu/~masud/computing/WPark_recipes_in_python.html.
    '''
    P = len(y1)  # Determine the length of the signal
    z = []  # Create a list of convolution values
    for k in range(P):
        t = 0
        lower = max(0, k - (P - 1))
        upper = min(P - 1, k)
        for i in range(lower, upper):
            t += (y1[i] * y2[k - i] + y1[i + 1] * y2[k - (i + 1)]) / 2
        z.append(t)
    z = np.array(z)  # Convert to a numpy array
    if dx is not None:  # Is a step width specified?
        z *= dx
    return z
steps = 50 #Number of integration steps
maxtime = 5 #Maximum time
dt = float(maxtime) / steps #Obtain the width of a time step
time = [dt * i for i in range(steps)] #Create an array of times
exp1 = [math.exp(-t) for t in time] #Create an array of function values
exp2 = [2 * math.exp(-2 * t) for t in time]
#Calculate the analytical expression
analytical = [2 * math.exp(-2 * t) * (-1 + math.exp(t)) for t in time]
#Calculate the trapezoidal convolution
trapezoidal = convolve(exp1, exp2, dt)
#Calculate the scipy convolution
sci = signal.convolve(exp1, exp2, mode = 'full')
#Slice the first half to obtain the causal convolution and multiply by dt
#to account for the step width
sci = sci[0:steps] * dt
#Calculate the convolution using the original Riemann sum algorithm
riemann = convolveoriginal(exp1, exp2)
riemann = riemann[0:steps] * dt
#Plot
plt.plot(time, analytical, label = 'analytical')
plt.plot(time, trapezoidal, 'o', label = 'trapezoidal')
plt.plot(time, riemann, 'o', label = 'Riemann')
plt.plot(time, sci, '.', label = 'scipy.signal.convolve')
plt.legend()
plt.show()
Thank you for your time!
Or, for those who prefer numpy to C: it will be slower than the C implementation, but it's just a few lines.
>>> t = np.linspace(0, maxtime-dt, 50)
>>> fx = np.exp(-np.array(t))
>>> gx = 2*np.exp(-2*np.array(t))
>>> analytical = 2 * np.exp(-2 * t) * (-1 + np.exp(t))
This looks like the trapezoidal rule in this case (but I didn't check the math):
>>> s2a = signal.convolve(fx[1:], gx, 'full')*dt
>>> s2b = signal.convolve(fx, gx[1:], 'full')*dt
>>> s = (s2a+s2b)/2
>>> s[:10]
array([ 0.17235682, 0.29706872, 0.38433313, 0.44235042, 0.47770012,
0.49564748, 0.50039326, 0.49527721, 0.48294359, 0.46547582])
>>> analytical[:10]
array([ 0. , 0.17221333, 0.29682141, 0.38401317, 0.44198216,
0.47730244, 0.49523485, 0.49997668, 0.49486489, 0.48254154])
largest absolute error:
>>> np.max(np.abs(s[:len(analytical)-1] - analytical[1:]))
0.00041657780840698155
>>> np.argmax(np.abs(s[:len(analytical)-1] - analytical[1:]))
6
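A sketch of why this matches the trapezoidal rule (my reading of the index algebra): averaging the two shifted convolutions pairs neighbouring samples exactly as the trapezoidal rule does,
(s2a[k] + s2b[k]) / 2 = \frac{dt}{2} \sum_{i} \left( f_{i+1} \, g_{k-i} + f_i \, g_{k-i+1} \right) \approx \int_0^{t_{k+1}} f(u) \, g(t_{k+1} - u) \, du
which is why the check above compares s[:len(analytical)-1] against analytical[1:], i.e. with a one-sample shift.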
Short answer: Write it in C!
Long answer
Using the cookbook about numpy arrays I rewrote the trapezoidal convolution method in C. In order to use the C code one requires three files (https://gist.github.com/1626919)
The C code (performancemodule.c).
The setup file to build the code and make it callable from python (performancemodulesetup.py).
The python file that makes use of the C extension (performancetest.py)
The code should run upon downloading by doing the following
Adjust the include path in performancemodule.c.
Run the following
python performancemodulesetup.py build
python performancetest.py
You may have to copy the library file performancemodule.so or performancemodule.dll into the same directory as performancetest.py.
Results and performance
The results agree neatly with one another as shown below:
The performance of the C method is even better than scipy's convolve method. Running 10k convolutions with array length 50 requires
convolve (seconds, microseconds) 81 349969
scipy.signal.convolve (seconds, microseconds) 1 962599
convolve in C (seconds, microseconds) 0 87024
Thus, the C implementation is about 1000 times faster than the python implementation and a bit more than 20 times as fast as the scipy implementation (admittedly, the scipy implementation is more versatile).
EDIT: This does not solve the original question exactly but is sufficient for my purposes.