Given inputs of:
present value = 11, a summation of future values = 126, and n = 7 (periods of change),
how can I solve for a rate of change that would create a chain of values summing to the FV? This is different from just solving for a rate of return between 11 and 126; this is solving for the rate of return that allows the summation to reach 126. I've been trying different ideas and looking up IRR and NPV functions, but the summation aspect is stumping me.
In case the summation aspect isn't clear: if I assume a rate of 1.1, that would turn PV = 11 into the list below (which adds up to nearly the FV of 126). How can I solve for r knowing only n, the summation FV, and PV?
11
12.1
13.31
14.641
16.1051
17.71561
19.487171
21.4358881
total = 125.7947691
Thank you.
EDIT:
I attempted to create a sort of iterator, but it's hanging after the first loop...
for r in (1.01,1.02,1.03,1.04,1.05,1.06,1.07,1.08,1.09,1.10,1.11,1.12):
    print r
    test = round(11* (1-r**8) / (1 - r),0)
    print test
    while True:
        if round(126,0) == round(11* (1-r**8) / (1 - r),0):
            answer = r
            break
        else:
            pass
EDIT 2:
IV = float(11)
SV = float(126)
N = 8

# sum of a geometric series: SV = IV * (1 - r^N) / (1 - r)
# rearranged: r^N - (SV/IV)*r + (SV - IV)/IV = 0
# long form polynomial to be solved, with an N of 3 for example:
# 1*r^3 + 0*r^2 - (SV/IV)*r + (SV - IV)/IV
# each polynomial coefficient goes into numpy.roots, which solves
# for the values of r that make the polynomial above equal zero.
import numpy
array = numpy.roots([1.,0.,0.,0.,0.,0.,0.,(-SV)/IV,(SV-IV)/IV])
for i in array:
    if i > 1:
        a = str(i)
        b = a.split("+")
        answer = float(b[0])
print answer
I'm getting a ValueError that my string "1.10044876702" can't be converted to float. Any ideas?
SOLVED: i.real gets the real part of the complex root, so there's no need for the split or the string conversion, i.e.:

for i in array:
    if i > 1:
        a = i.real
        answer = float(a)
print answer
The sum of a geometric series with first term a and common ratio r is S = a * (1 - r**n) / (1 - r). Subbing in,
126 = 11 * (1 - r**8) / (1 - r)
where we need to solve for r. Multiplying both sides by (1 - r), collecting terms, and dividing through by 11 gives
r**8 - (126/11)*r + (115/11) = 0
then using NumPy
import numpy as np
np.roots([1., 0., 0., 0., 0., 0., 0., -126./11, 115./11])
gives
array([-1.37597528+0.62438671j, -1.37597528-0.62438671j,
-0.42293755+1.41183514j, -0.42293755-1.41183514j,
0.74868844+1.1640769j , 0.74868844-1.1640769j ,
1.10044877+0.j , 1.00000000+0.j ])
where the first six roots are complex and the root r = 1 is invalid (it gives a division by zero in the original equation), so the only usable answer is r = 1.10044877.
Edit:
Per the Numpy docs, np.roots expects an array-like object (e.g. a list) containing the polynomial coefficients. So the parameters above can be read as 1.0*r^8 + 0.*r^7 + 0.*r^6 + 0.*r^5 + 0.*r^4 + 0.*r^3 + 0.*r^2 - 126./11*r + 115./11, which is the polynomial to be solved.
Your iterative solver is pretty crude; it will get you a ballpark answer, but its running time grows exponentially with the desired number of digits of accuracy. We can do much better!
No general analytic solution exists for an eighth-order polynomial equation (per the Abel-Ruffini theorem), so some numeric method is needed.
If you really want to code your own solver from scratch, the simplest is the Newton-Raphson method: start with a guess, then iteratively evaluate the function and offset your guess by the error divided by the first derivative, hopefully converging on a root. You'll need a good initial guess, and your equation must have real roots.
If you care more about getting good answers quickly, np.roots is hard to beat - it computes the eigenvalues of the companion matrix to simultaneously find all roots, both real and complex.
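Here is a minimal sketch of that Newton-Raphson approach for this particular problem (my own illustration; the tolerance and starting guess are arbitrary). It uses the expanded form 11*(1 + r + ... + r**7) = 126, which avoids the division by (1 - r) at r = 1:

def newton_rate(pv, total, periods, guess=1.05, tol=1e-12, max_iter=100):
    # f(r) = pv*(1 + r + ... + r**periods) - total, and f'(r) its derivative
    r = guess
    for _ in range(max_iter):
        f = pv * sum(r**k for k in range(periods + 1)) - total
        df = pv * sum(k * r**(k - 1) for k in range(1, periods + 1))
        step = f / df
        r -= step
        if abs(step) < tol:
            return r
    raise ValueError('did not converge')

print(newton_rate(11., 126., 7))  # ~1.10044876702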
Edit 2:
Your iterative solver is hanging because of your while True clause - r never changes in the loop, so you will never break. Also, else: pass is redundant and can be removed.
After a good bit of rearranging, your code becomes:
import numpy as np
def iterative_test(rng, fn, goal):
    return min(rng, key=lambda x: abs(goal - fn(x)))

rng = np.arange(1.01, 1.20, 0.01)
fn = lambda x: 11. * (1. - x**8) / (1. - x)
goal = 126.
sol = iterative_test(rng, fn, goal)
print('Solution: {} -> {}'.format(sol, fn(sol)))
which results in
Solution: 1.1 -> 125.7947691
Edit 3:
Your last solution is looking much better, but you must keep in mind that the degree of the polynomial (and hence the length of the array passed to np.roots) changes as the number of periods changes.
import numpy as np
def find_rate(present_value, final_sum, periods):
    """
    Given the initial value, sum, and number of periods in
    a geometric series, solve for the rate of growth.
    """
    # The formula for the sum of a geometric series is
    #   final_sum = sum_i[0..periods](present_value * rate**i)
    # which can be reduced to
    #   final_sum = present_value * (1 - rate**(periods+1)) / (1 - rate)
    # and then rearranged as
    #   rate**(periods+1) - (final_sum / present_value)*rate + (final_sum / present_value - 1) = 0
    # Build the polynomial
    poly = [0.] * (periods + 2)
    poly[ 0] = 1.
    poly[-2] = -1. * final_sum / present_value
    poly[-1] = 1. * final_sum / present_value - 1.
    # Find the roots
    roots = np.roots(poly)
    # Discard unusable roots
    roots = [rt for rt in roots if rt.imag == 0. and rt.real != 1.]
    # Should be zero or one roots left
    if len(roots):
        return roots[0].real
    else:
        raise ValueError('no solution found')

def main():
    pv, fs, p = 11., 126., 7
    print('Solution for present_value = {}, final_sum = {}, periods = {}:'.format(pv, fs, p))
    print('rate = {}'.format(find_rate(pv, fs, p)))

if __name__=="__main__":
    main()
This produces:
Solution for present_value = 11.0, final_sum = 126.0, periods = 7:
rate = 1.10044876702
Solving for all the polynomial roots is overkill. This computation is usually done with a root-finder such as Newton's method applied directly to the exponential formula; that also works for fractional durations.
For example, https://math.stackexchange.com/questions/502976/newtons-method-annuity-due-equation
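Along those lines, a minimal sketch with scipy.optimize.newton (the starting guess of 1.05 is arbitrary, and r = 1 must be avoided since the closed form divides by 1 - r):

from scipy.optimize import newton

pv, sv, n = 11., 126., 7
f = lambda r: pv * (1 - r**(n + 1)) / (1 - r) - sv  # closed-form geometric sum minus the target
print(newton(f, x0=1.05))  # ~1.10044876702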
Related
I have a function, for example f(x) = k * x^(1/a), but this can be something else as well, like a quadratic or logarithmic function. I am only interested in the domain x in [1, 5*10^4]. The parameters of the function (a and k in this case) are known as well.
My goal is to fit a continuous piece-wise function to this, which contains alternating segments of linear functions (i.e. sloped straight segments, each with intercept of 0) and constants (i.e. horizontal segments joining the sloped segments together). The first and last segments are both sloped. And the number of segments should be pre-selected between around 9-29 (that is 5-15 linear steps + 4-14 constant plateaus).
Formally
The input function: f(x) = k * x^(1/a)
The fitted piecewise function, with slopes b, plateau constants c, and breakpoints r:
g(x) = b1*x for x <= r1; c1 for r1 < x <= r2; b2*x for r2 < x <= r3; c2 for r3 < x <= r4; ...; bm*x for x > r(2m-2)
I am looking for the optimal resulting parameters (c, r, b) (in terms of least squares) if the number of segments (n) is specified beforehand.
The resulting constants (c) and the breakpoints (r) should be whole natural numbers, and the slopes (b) should be values rounded to two decimal places.
I have tried to do the fitting numerically using the pwlf package with a segmented constant model, and then further processed the resulting constant model with some graphical intuition to "slice" the constant steps with the slopes. It works to some extent, but I am sure this is suboptimal from both a fitting and a computational-efficiency perspective. It takes multiple minutes to generate a fit with 8 slopes on the range 1-50000. I am sure there must be a better way to do this.
My idea is that instead of relying purely on numerical methods/ML, the fact that we have the algebraic form of the input function could be exploited, for instance through algebraic transforms (integrals), to reduce this to a simpler optimization problem.
import numpy as np
import matplotlib.pyplot as plt
import pwlf
# The input function
def input_func(x,k,a):
    return np.power(x,1/a)*k
x = np.arange(1,5e4)
y = input_func(x, 1.8, 1.3)
plt.plot(x,y);
def pw_fit(func, x_r, no_seg, *fparams):
    # working on the specified range
    x = np.arange(1,x_r)
    y_input = func(x, *fparams)
    my_pwlf = pwlf.PiecewiseLinFit(x, y_input, degree=0)
    res = my_pwlf.fit(no_seg)
    yHat = my_pwlf.predict(x)
    # Function values at the breakpoints
    y_isec = func(res, *fparams)
    # Slope values at the breakpoints
    slopes = np.round(y_isec / res, decimals=2)
    slopes = slopes[1:]
    # For the first slope value, I use the intersection of the first constant plateau and the input function
    first_cross = np.argwhere(np.diff(np.sign(y_input - yHat))).flatten()[0]
    slopes = np.insert(slopes, 0, np.round(y_input[first_cross] / first_cross, decimals=2))
    plateaus = np.unique(np.round(yHat))
    # If due to rounding slope values (to two decimals), there is no change in a subsequent step, I just remove those segments
    to_del = np.argwhere(np.diff(slopes) == 0).flatten()
    slopes = np.delete(slopes,to_del + 1)
    plateaus = np.delete(plateaus,to_del)
    breakpoints = [np.ceil(plateaus[0]/slopes[0])]
    for idx, j in enumerate(slopes[1:-1]):
        breakpoints.append(np.floor(plateaus[idx]/j))
        breakpoints.append(np.ceil(plateaus[idx+1]/j))
    breakpoints.append(np.floor(plateaus[-1]/slopes[-1]))
    return slopes, plateaus, breakpoints
slo, plat, breaks = pw_fit(input_func, 50000, 8, 1.8, 1.3)
# The piecewise function itself
def pw_calc(x, slopes, plateaus, breaks):
    x = x.astype('float')
    cond_list = [x < breaks[0]]
    for idx, j in enumerate(breaks[:-1]):
        cond_list.append((j <= x) & (x < breaks[idx+1]))
    cond_list.append(breaks[-1] <= x)
    func_list = [lambda x: x * slopes[0]]
    for idx, j in enumerate(slopes[1:]):
        func_list.append(plateaus[idx])
        func_list.append(lambda x, j=j: x * j)
    return np.piecewise(x, cond_list, func_list)
y_output = pw_calc(x, slo, plat, breaks)
plt.plot(x,y,y_output);
(Not important, but I think the fitted piecewise function is not continuous as it is. Intervals should be x<=r1; r1<x<=r2; ....)
As Anatolyg has pointed out, it looks to me that in the optimal solution (at least for the function posted, and probably for any function whose derivative is nonzero), the horizontal segments will collapse to a point or to the minimum segment length (in this case 1).
EDIT---------------------------------------------
The behavior above can only hold if the slopes are allowed an intercept. If the intercepts are zero, as posted in the question, one consideration must be taken into account: is the initial parabolic function defined at zero or nearby? Imagine the function y = 0.001*sqrt(x-1000): the segments defined as b*x would have slopes so close to zero, and be so similar to the constant segments, that the best fit would simply be the single zero-intercept line that best fits the whole function.
Provided that the function is defined at zero or nearby, you can start by approximating the curve with linear segments (with intercepts); a rough sketch in code follows the steps below:
1) Divide the function domain into N intervals (equal intervals, or intervals whose size is a function of the average curvature, or second derivative, of the function along the domain).
2) Do a linear fit/regression in each interval.
3) For each interval, if a point (or bunch of points) at the extreme of the interval is better fitted by the line of the neighboring interval than by its own line, reassign that point to the neighboring interval.
4) Repeat from 2) until no extreme points are moved.
Linear regressions might be optimized so as not to recompute all the covariance matrices from scratch on each iteration, but to just add the contributions of the moved points to the previous covariance matrices.
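A rough sketch of steps 1)-4) under simplifying assumptions (equal starting intervals, one boundary point moved at a time; refit_intervals and all names here are illustrative, not from any library):

import numpy as np

def refit_intervals(x, y, n_intervals, max_iter=100):
    # Step 1: start with equal-sized intervals over the sampled domain
    bounds = np.linspace(0, len(x), n_intervals + 1).astype(int)
    coefs = None
    for _ in range(max_iter):
        # Step 2: one linear fit (slope, intercept) per interval
        coefs = [np.polyfit(x[bounds[i]:bounds[i + 1]],
                            y[bounds[i]:bounds[i + 1]], 1)
                 for i in range(n_intervals)]
        # Step 3: move a boundary point to the neighbor whose line fits it better
        moved = False
        for i in range(n_intervals - 1):
            j = bounds[i + 1]
            # last point of the left interval vs. the right interval's line
            if (abs(np.polyval(coefs[i + 1], x[j - 1]) - y[j - 1]) <
                    abs(np.polyval(coefs[i], x[j - 1]) - y[j - 1]) and
                    bounds[i + 1] - bounds[i] > 2):
                bounds[i + 1] -= 1
                moved = True
            # first point of the right interval vs. the left interval's line
            elif (abs(np.polyval(coefs[i], x[j]) - y[j]) <
                    abs(np.polyval(coefs[i + 1], x[j]) - y[j]) and
                    bounds[i + 2] - bounds[i + 1] > 2):
                bounds[i + 1] += 1
                moved = True
        # Step 4: stop once no boundary point moved
        if not moved:
            break
    return bounds, coefs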
Then each linear segment (LSi) is replaced by a combination of a small constant segment at the beginning (Cbi), a linear segment without intercept (Si), and another constant segment at the end (Cei). These segments are easy to calculate: Si will contain the middle point of LSi, and Cbi and Cei will take, respectively, the begin and end values of the segment LSi. Then the interval of each segment has to be calculated as an intersection between lines.
With this, the constant end segment will be collinear with the constant begin segment from the next interval so they will merge, resulting in a series of constant and linear segments interleaved.
But this would only be a floating-point starting solution. Next, you will have to apply all the roundings, which will mess up the segments quite a lot, as the integer-interval condition and the zero-intercept condition on the linear segments can conflict badly. In fact, b, c, and r are not totally independent: if c_i and r_(i+1) are known, then b_(i+1) is already fixed.
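Concretely: for the sloped segment to meet the preceding plateau at the breakpoint, we need c_i = b_(i+1) * r_(i+1), so b_(i+1) = c_i / r_(i+1).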
If nothing is broken so far, the final task will be to minimize the error/cost function (I assume it will be the integral of the error between the parabolic function and the segments). My guess is that gradients here will be quite a pain: if you change, for example, one c_i, all the rest of the b_j and c_j will have to adapt as well due to the integer-interval restriction. However, if you can generalize the derivatives between parameters (how much do I have to adapt b_(i+1) if c_i changes by a unit?), you can propagate the change of one parameter to all the other parameters and get a kind of gradient. Then, for each interval, you can estimate what the ideal parameter would be, and by averaging over all intervals calculate the best gradient step. Let me illustrate this:
Assuming first that r parameters are fixed, if I change c1 by one unit, b2 changes by 0.1, c2 changes by -0.2 and b3 changes by 0.2. This would be the gradient.
Then I estimate, comparing with the parabolic curve, that c1 should increase 0.5 (to reduce the cost by 10 points), b2 should increase 0.2 (to reduce the cost by 5 points), c2 should increase 0.2 (to reduce the cost by 6 points) and b3 should increase 0.1 (to reduce the cost by 9 points).
Finally, the gradient step would be (0.5/1·10 + 0.2/0.1·5 - 0.2/(-0.2)·6 + 0.1/0.2·9)/(10 + 5 + 6 + 9)~= 0.45. Thus, c1 would increase 0.45 units, b2 would increase 0.45·0.1, and so on.
When you add the r parameters to the pot, since integer intervals do not have a proper derivative, the calculation is not straightforward. However, you can treat the r parameters as floating point, calculate and apply the gradient step, and then apply the roundings.
We can integrate the squared error function for linear and constant pieces and let SciPy optimize it. Python 3:
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize

xl = 1
xh = 50000
a = 1.3
p = 1 / a
n = 8

def split_b_and_c(bc):
    return bc[::2], bc[1::2]

def solve_for_r(b, c):
    r = np.empty(2 * n)
    r[0] = xl
    r[1:-1:2] = c / b[:-1]
    r[2::2] = c / b[1:]
    r[-1] = xh
    return r

def linear_residual_integral(b, x):
    return (
        (x ** (2 * p + 1)) / (2 * p + 1)
        - 2 * b * x ** (p + 2) / (p + 2)
        + b ** 2 * x ** 3 / 3
    )

def constant_residual_integral(c, x):
    return x ** (2 * p + 1) / (2 * p + 1) - 2 * c * x ** (p + 1) / (p + 1) + c ** 2 * x

def squared_error(bc):
    b, c = split_b_and_c(bc)
    r = solve_for_r(b, c)
    linear = np.sum(
        linear_residual_integral(b, r[1::2]) - linear_residual_integral(b, r[::2])
    )
    constant = np.sum(
        constant_residual_integral(c, r[2::2])
        - constant_residual_integral(c, r[1:-1:2])
    )
    return linear + constant

def evaluate(x, b, c, r):
    i = 0
    while x > r[i + 1]:
        i += 1
    return b[i // 2] * x if i % 2 == 0 else c[i // 2]

def main():
    bc0 = (xl + (xh - xl) * np.arange(1, 4 * n - 2, 2) / (4 * n - 2)) ** (
        p - 1 + np.arange(2 * n - 1) % 2
    )
    bc = scipy.optimize.minimize(
        squared_error, bc0, bounds=[(1e-06, None) for i in range(2 * n - 1)]
    ).x
    b, c = split_b_and_c(bc)
    r = solve_for_r(b, c)
    X = np.linspace(xl, xh, 1000)
    Y = [evaluate(x, b, c, r) for x in X]
    plt.plot(X, X ** p)
    plt.plot(X, Y)
    plt.show()

if __name__ == "__main__":
    main()
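For reference, the two residual-integral helpers above come from expanding the squared residuals by hand. For a sloped piece b*x fitted against x^p, (x^p - b*x)^2 = x^(2p) - 2*b*x^(p+1) + b^2*x^2, whose antiderivative is x^(2p+1)/(2p+1) - 2*b*x^(p+2)/(p+2) + b^2*x^3/3 (linear_residual_integral); for a constant piece c, (x^p - c)^2 integrates to x^(2p+1)/(2p+1) - 2*c*x^(p+1)/(p+1) + c^2*x (constant_residual_integral). squared_error then just evaluates these between consecutive breakpoints and sums the pieces.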
I have tried to come up with a new solution myself, based on the idea of @Amo Robb, where I have partitioned the domain and curve-fitted a dual piece (constant and linear) together (with the help of np.maximum). I used 1/f'(x) to designate the breakpoints, but I know this is arbitrary and does not provide a global optimum. Still, this solution is OK for me, as it might be appropriate to have a better fit on the first segments at the expense of error in the later segments. (The task itself is actually a cost-based retail margin calculation {supply price -> added margin}, as the retail POS software can only work with such a piecewise margin function.)
The answer from @David Eisenstat is the correct optimal solution if the parameters are allowed to be floats. Unfortunately, the POS software cannot use floats. It is OK to round the c-s and r-s afterwards, but the b-s must be rounded to two decimals, as they are inputted as percents, and that constraint would ruin the optimal solution's long floats. I will try to further improve my solution with both Amo's and David's valuable input. Thank you for that!
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
# The input function f(x)
def input_func(x,k,a):
    return np.power(x,1/a) * k

# 1 / f'(x)
def one_per_der(x,k,a):
    return a / (k * np.power(x, 1/a-1))

# 1 / f'(x), inverted
def one_per_der_inv(x,k,a):
    return np.power(a / (x*k), a / (1-a))

def segment_fit(start,end,y,first_val):
    b, _ = curve_fit(lambda x,b: np.maximum(first_val, b*x), np.arange(start,end), y[start-1:end-1])
    b = float(np.round(b, decimals=2))
    bp = np.round(first_val / b)
    last_val = np.round(b * end)
    return b, bp, last_val

def pw_fit(end_range, no_seg, **fparams):
    y_bps = np.linspace(one_per_der(1, **fparams), one_per_der(end_range,**fparams), no_seg+1)[1:]
    x_bps = np.round(one_per_der_inv(y_bps, **fparams))
    y = input_func(x, **fparams)
    slopes = [np.round(float(curve_fit(lambda x,b: x * b, np.arange(1,x_bps[0]), y[:int(x_bps[0])-1])[0]), decimals = 2)]
    plats = [np.round(x_bps[0] * slopes[0])]
    bps = []
    for i, xbp in enumerate(x_bps[1:]):
        b, bp, last_val = segment_fit(int(x_bps[i]+1), int(xbp), y, plats[i])
        slopes.append(b); bps.append(bp); plats.append(last_val)
    breaks = sorted(list(x_bps) + bps)[:-1]
    # If due to rounding slope values (to two decimals), there is no change in a subsequent step, I just remove those segments
    to_del = np.argwhere(np.diff(slopes) == 0).flatten()
    breaks_to_del = np.concatenate((to_del * 2, to_del * 2 + 1))
    slopes = np.delete(slopes,to_del + 1)
    plats = np.delete(plats[:-1],to_del)
    breaks = np.delete(breaks,breaks_to_del)
    return slopes, plats, breaks

def pw_calc(x, slopes, plateaus, breaks):
    x = x.astype('float')
    cond_list = [x < breaks[0]]
    for idx, j in enumerate(breaks[:-1]):
        cond_list.append((j <= x) & (x < breaks[idx+1]))
    cond_list.append(breaks[-1] <= x)
    func_list = [lambda x: x * slopes[0]]
    for idx, j in enumerate(slopes[1:]):
        func_list.append(plateaus[idx])
        func_list.append(lambda x, j=j: x * j)
    return np.piecewise(x, cond_list, func_list)
fparams = {'k':1.8, 'a':1.2}
end_range = 5e4
no_steps = 10
x = np.arange(1, end_range)
y = input_func(x, **fparams)
slopes, plats, breaks = pw_fit(end_range, no_steps, **fparams)
y_output = pw_calc(x, slopes, plats, breaks)
plt.plot(x,y_output,y);
I am new to Python and I am working on a finance project to solve a set of equations that lets me go from par spread to flat spread in terms of CDS.
I have a set of data for the upfront (U) and years (i); to set up the data sample, I store the upfronts in x and the years in y:
x = [-0.007,-0.01,-0.009,-0.004,0.005,0.011,0.018,0.027,0.037,0.048]
y = [1,2,3,4,5,6,7,8,9,10]
Here are the 3 equations that I am trying to solve together:
U = A * (s(i) - c)
L(i) = 1 - (1 - s(i) / (1 - R)) ** i / (1 - s(i-1) / (1 - R)) ** (i - 1)
A = sum([((1 - L(j)) / (1 + r)) ** j for j in range(1, i+1)])
Detailed explanation:
The goal is to solve and list the results for all 10 values of variable s
1st equation is used to calculate the upfront amount, where s is unknown
2nd equation is used to calculate the hazard rate L, where R is the recovery rate, s(i) is the current s term, and s(i-1) is the previous s term.
3rd equation is used to calculate the annual risky annuity; its purpose is to calculate and sum the risk annuities. For example, if i=1, then there is one term in the equation; if i=2, there are 2 terms, which are summed. This repeats until the 10th iteration, where there are 10 values and they are summed.
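Written out for i = 2 (my reading of the equation above): A = (1 - L(1))/(1 + r) + ((1 - L(2))/(1 + r))**2.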
To attempt to solve the problem, I wrote the following code (which doesn't run yet):
x = [-0.007,-0.01,-0.009,-0.004,0.005,0.011,0.018,0.027,0.037,0.048]
y = [1,2,3,4,5,6,7,8,9,10]
c = 0.01
r = 0.01
R = 0.4
def eqs(s, U, t, c=0.01, r=0.01, R=0.4):
    L = 1 - (1 - (s / (1 - R)) ** t) / (1 - (1 / (1 - R)) ** (t - 1))
    A = sum([((1 - L) / (1 + r)) ** j for j in range(1, i+1)])
    s = (U/A) + c
    return L, A, s

for U, t in zip(x, y):
    s = fsolve(eq1, 0.01, (U, t,))
    print(s, U, t)
Main obstacles:
I haven't found a way where I can make Equation 3 work.
I also haven't been able to pass two sets of values into the for loop that then calls the function
I wasn't able to loop the previous spread value, s(i-1), back into the iteration to compute the next value
I was able to solve it manually in Python by changing the third equation every iteration and inputting the previous results
I am hoping I can find some solution to my problem, thank you for your help in advance!
It took me a bit, but I think I got it. Your main problem is that you can't just write formulas describing a complex problem, call a 'magic' fsolve function, and hope that Python will solve it for you, without even defining what the unknown is.
It doesn't work that way. You have to make your problem simple enough that it can be solved with existing functions from some libraries. Python has no form of intelligence or divination.
As I said in my comments, the fsolve() from scipy.optimise can only solve problems of the form f(x)=0.
If you want to use it, you have to transform your complex problem in a simple f(x)=0. problem.
Starting from your 3rd equation s = (U/A) + c we can deduce that s - (U/A) - c = 0
Given that A is a function of L and L is a function of s, if you define a function f(s)= s - (U/A) - c then s is the solution of f(s)=0.
That is what I did in the following code:
from scipy.optimize import fsolve

def Lambda(s,sold,R,t):
    num = (1 - s / (1 - R)) ** t
    den = (1 - sold / (1 - R)) ** (t - 1)
    return 1 - num/den

def Annuity(L,r,Aold,j):
    return Aold + ((1 - L) / (1 + r)) ** j

def f(s,U,sold,R,t,r,Aold,j):
    L = Lambda(s,sold,R,t)
    A = Annuity(L,r,Aold,j)
    return s - (U/A) - c

x = [-0.007,-0.01,-0.009,-0.004,0.005,0.011,0.018,0.027,0.037,0.048]
y = [1,2,3,4,5,6,7,8,9,10]

c = 0.01
r = 0.01
R = 0.4

sold = 0.
Aold = 0.
for n,(U, t) in enumerate(zip(x, y)):
    j = n + 1
    print("j={},U={},t={}".format(j,U,t))
    init = 0.01 # The starting estimate for the roots of f(s) = 0.
    roots = fsolve(f,init,args=(U, sold,R,t,r,Aold,j))
    s = roots[0]
    L = Lambda(s,sold,R,t)
    A = Annuity(L,r,Aold,j)
    print("s={},L={},A={}".format(s,L,A))
    print()
    sold = s
    Aold = A
It gives following outputs :
j=1,U=-0.007,t=1
s=0.00289571337037,L=0.00482618895061,A=0.985320604999
j=2,U=-0.01,t=2
s=0.00485464221105,L=0.0113452406083,A=1.94349944361
j=3,U=-0.009,t=3
s=0.00685582655826,L=0.0180633847507,A=2.86243751076
j=4,U=-0.004,t=4
s=0.00892769166807,L=0.0251666093582,A=3.73027037175
j=5,U=0.005,t=5
s=0.0111024600844,L=0.0328696834011,A=4.53531159145
j=6,U=0.011,t=6
s=0.0120640333844,L=0.0280806661972,A=5.32937116379
j=7,U=0.018,t=7
s=0.0129604367831,L=0.0305170484121,A=6.08018387787
j=8,U=0.027,t=8
s=0.0139861021632,L=0.0351929301367,A=6.77353436882
j=9,U=0.037,t=9
s=0.0149883645118,L=0.0382416644539,A=7.41726068981
j=10,U=0.048,t=10
s=0.0159931206639,L=0.041597709395,A=8.00918297693
No idea if it's correct, but it looks plausible to me. I guess you get the idea now and will be able to make some adjustments.
THIS PART IS JUST BACKGROUND IF YOU NEED IT
I am developing a numerical solver for the Second-Order Kuramoto Model. The functions I use to find the derivatives of theta and omega are given below.
# n-dimensional change in theta
def d_theta(omega):
    return omega

# n-dimensional change in omega
def d_omega(K,A,P,alpha,mask,n):
    def layer1(theta,omega):
        T = theta[:,None] - theta
        A[mask] = K[mask] * np.sin(T[mask])
        return - alpha*omega + P - A.sum(1)
    return layer1
These equations return vectors.
QUESTION 1
I know how to use odeint for two dimensions (y, t). For my research I want to use a built-in Python function that works for higher dimensions.
QUESTION 2
I do not necessarily want to stop after a predetermined amount of time. I have other stopping conditions in the code below that will indicate whether the system of equations converges to the steady state. How do I incorporate these into a built-in Python solver?
WHAT I CURRENTLY HAVE
This is the code I am currently using to solve the system. I just implemented RK4 with constant time stepping in a loop.
# This function randomly samples initial values in the domain and returns whether the solution converged
# Inputs:
#   f          change in theta (d_theta)
#   g          change in omega (d_omega)
#   tol_ss     when the norm of omega is below this tolerance, the system is considered at steady state
#   tol_step   when the step-to-step change is below this tolerance, the solution is said to converge
#   h          size of the time step
#   max_iter   maximum number of steps Runge-Kutta will perform before giving up
#   max_laps   maximum number of laps the solution can do before giving up
#   fixed_t    vector of fixed points of theta
#   fixed_o    vector of fixed points of omega
#   n          number of dimensions
#   theta      initial theta vector
#   omega      initial omega vector
# Outputs:
#   converges  True if the nodes restabilize, False otherwise
def kuramoto_rk4_wss(f,g,tol_ss,tol_step,h,max_iter,max_laps,fixed_o,fixed_t,n):
    def layer1(theta,omega):
        lap = np.zeros(n, dtype = int)
        converges = False
        i = 0
        tau = 2 * np.pi
        while(i < max_iter): # perform RK4 with constant time step
            p_omega = omega
            p_theta = theta
            T1 = h*f(omega)
            O1 = h*g(theta,omega)
            T2 = h*f(omega + O1/2)
            O2 = h*g(theta + T1/2,omega + O1/2)
            T3 = h*f(omega + O2/2)
            O3 = h*g(theta + T2/2,omega + O2/2)
            T4 = h*f(omega + O3)
            O4 = h*g(theta + T3,omega + O3)
            theta = theta + (T1 + 2*T2 + 2*T3 + T4)/6 # take theta time step
            mask2 = np.array(np.where(np.logical_or(theta > tau, theta < 0))) # find which nodes left [0, 2pi]
            lap[mask2] = lap[mask2] + 1 # increment the lap counter for those nodes
            theta[mask2] = np.mod(theta[mask2], tau) # take the modulus
            omega = omega + (O1 + 2*O2 + 2*O3 + O4)/6 # take omega time step
            if(max_laps in lap): # if any generator rotates this many times it probably won't converge
                break
            elif(np.any(omega > 12)): # if any generator is rotating this fast, it probably won't converge
                break
            elif(np.linalg.norm(omega) < tol_ss and # assert the nodes are sufficiently close to the equilibrium
                 np.linalg.norm(omega - p_omega) < tol_step and # assert change in omega is small
                 np.linalg.norm(theta - p_theta) < tol_step): # assert change in theta is small
                converges = True
                break
            i = i + 1
        return converges
    return layer1
Thanks for your help!
You can wrap your existing functions into a function accepted by odeint (option tfirst=True) and solve_ivp as
def odesys(t,u):
    theta,omega = u[:n],u[n:]   # or theta,omega = u.reshape(2,-1)
    return [*f(omega), *g(theta,omega)]   # or np.concatenate([f(omega), g(theta,omega)])

u0 = [*theta0, *omega0]
t = np.linspace(t0, tf, timesteps+1)
u = odeint(odesys, u0, t, tfirst=True)
# or
res = solve_ivp(odesys, [t0,tf], u0, t_eval=t)
The scipy methods pass numpy arrays and convert the return value into the same, so you do not have to worry about that inside the ODE function. The variant in the comments uses explicit numpy functions.
While solve_ivp does have event handling, using it for a systematic collection of events is rather cumbersome. It would be easier to advance some fixed step, do the normalization and termination detection, and then repeat this.
If you want to later increase efficiency somewhat, use directly the stepper classes behind solve_ivp.
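A rough sketch of that advance-and-check loop, reusing odesys from above (the helper name, chunk size, and tolerances are illustrative, not from scipy):

import numpy as np
from scipy.integrate import solve_ivp

def integrate_until_steady(odesys, theta0, omega0, chunk=1.0, max_chunks=1000,
                           tol_ss=1e-6, tol_step=1e-6):
    n = len(theta0)
    u = np.concatenate([theta0, omega0])
    t = 0.0
    for _ in range(max_chunks):
        res = solve_ivp(odesys, [t, t + chunk], u)
        u_new = res.y[:, -1]
        u_new[:n] = np.mod(u_new[:n], 2 * np.pi)   # normalize theta into [0, 2*pi)
        # termination detection: omega near zero and barely changing
        if (np.linalg.norm(u_new[n:]) < tol_ss and
                np.linalg.norm(u_new[n:] - u[n:]) < tol_step):
            return u_new, t + chunk, True
        u, t = u_new, t + chunk
    return u, t, False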
My implementation of steepest descent for solving Ax = b is showing some weird behavior: for any matrix large enough (~10 x 10; I have only tested square matrices so far), the returned x contains all huge values (on the order of 1e10).
import warnings
import numpy as np

def steepestDescent(A, b, numIter=100, x=None):
    """Solves Ax = b using steepest descent method"""
    warnings.filterwarnings(action="error", category=RuntimeWarning)
    # Reshape b in case it has shape (n,)
    b = b.reshape(len(b), 1)
    exes = []
    res = []
    # Make a guess for x if none is provided
    if x is None:
        x = np.zeros((len(A[0]), 1))
    exes.append(x)
    for i in range(numIter):
        # Re-calculate r(i) using r(i) = b - Ax(i) every five iterations
        # to prevent roundoff error. Also calculates the initial direction
        # of steepest descent.
        if (i % 5) == 0:
            r = b - np.dot(A, x)
        # Otherwise use r(i+1) = r(i) - step * Ar(i)
        else:
            r = r - step * np.dot(A, r)
        res.append(r)
        # Calculate step size. Catching the runtime warning allows the function
        # to stop and return before all iterations are completed. This is
        # necessary because once the solution x has been found, r = 0, so the
        # calculation below divides by 0, turning step into "nan", which then
        # goes on to overwrite the correct answer in x with "nan"s
        try:
            step = np.dot(r.T, r) / np.dot(np.dot(r.T, A), r)
        except RuntimeWarning:
            warnings.resetwarnings()
            return x
        # Update x
        x = x + step * r
        exes.append(x)
    warnings.resetwarnings()
    return x, exes, res
(exes and res are returned for debugging)
I assume the problem must be with calculating r or step (or some deeper issue) but I can't make out what it is.
The code seems correct. For example, the following test works for me (both linalg.solve and steepestDescent give close answers, most of the time):
import numpy as np
n = 100
A = np.random.random(size=(n,n)) + 10 * np.eye(n)
print(np.linalg.eig(A)[0])
b = np.random.random(size=(n,1))
x, xs, r = steepestDescent(A,b, numIter=50)
print(x - np.linalg.solve(A,b))
The problem is in the math. This algorithm is guaranteed to converge to the correct solution if A is a positive definite matrix. By adding the 10 * identity matrix to a random matrix, we increase the probability that all the eigenvalues are positive.
If you test with large random matrices (for example A = random.random(size=(n,n))), you are almost certain to have a negative eigenvalue, and the algorithm will not converge.
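A quick way to see this (an illustrative check, looking at the smallest real part of the spectrum):

import numpy as np

A = np.random.random(size=(100, 100))
print(np.min(np.linalg.eig(A)[0].real))                      # typically negative
print(np.min(np.linalg.eig(A + 10 * np.eye(100))[0].real))   # shifted to positive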
Short summary: How do I quickly calculate the finite convolution of two arrays?
Problem description
I am trying to obtain the finite convolution of two functions f(x), g(x), defined by
(f * g)(x) = integral from 0 to x of f(y) * g(x - y) dy
To achieve this, I have taken discrete samples of the functions and turned them into arrays of length steps:
xarray = [x * i / steps for i in range(steps)]
farray = [f(x) for x in xarray]
garray = [g(x) for x in xarray]
I then tried to calculate the convolution using the scipy.signal.convolve function. This function gives the same results as the algorithm conv suggested here. However, the results differ considerably from analytical solutions. Modifying the algorithm conv to use the trapezoidal rule gives the desired results.
To illustrate this, I let
f(x) = exp(-x)
g(x) = 2 * exp(-2 * x)
the results are:
Here Riemann represents a simple Riemann sum, trapezoidal is a modified version of the Riemann algorithm to use the trapezoidal rule, scipy.signal.convolve is the scipy function and analytical is the analytical convolution.
Now let g(x) = x^2 * exp(-x) and the results become:
Here 'ratio' is the ratio of the values obtained from scipy to the analytical values. The above demonstrates that the problem cannot be solved by renormalising the integral.
The question
Is it possible to get the speed of scipy but retain the better results of the trapezoidal rule, or do I have to write a C extension to achieve the desired results?
An example
Just copy and paste the code below to see the problem I am encountering. The two results can be brought into closer agreement by increasing the steps variable. I believe the problem is due to artefacts from right-hand Riemann sums: the integral is overestimated while the integrand is increasing, and approaches the analytical solution again as it decreases.
EDIT: I have now included the original algorithm 2 as a comparison which gives the same results as the scipy.signal.convolve function.
import numpy as np
import scipy.signal as signal
import matplotlib.pyplot as plt
import math
def convolveoriginal(x, y):
    '''
    The original algorithm from http://www.physics.rutgers.edu/~masud/computing/WPark_recipes_in_python.html.
    '''
    P, Q, N = len(x), len(y), len(x) + len(y) - 1
    z = []
    for k in range(N):
        t, lower, upper = 0, max(0, k - (Q - 1)), min(P - 1, k)
        for i in range(lower, upper + 1):
            t = t + x[i] * y[k - i]
        z.append(t)
    return np.array(z) # Modified to include conversion to numpy array

def convolve(y1, y2, dx = None):
    '''
    Compute the finite convolution of two signals of equal length.
    @param y1: First signal.
    @param y2: Second signal.
    @param dx: [optional] Integration step width.
    @note: Based on the algorithm at http://www.physics.rutgers.edu/~masud/computing/WPark_recipes_in_python.html.
    '''
    P = len(y1) # Determine the length of the signal
    z = [] # Create a list of convolution values
    for k in range(P):
        t = 0
        lower = max(0, k - (P - 1))
        upper = min(P - 1, k)
        for i in range(lower, upper):
            t += (y1[i] * y2[k - i] + y1[i + 1] * y2[k - (i + 1)]) / 2
        z.append(t)
    z = np.array(z) # Convert to a numpy array
    if dx is not None: # Is a step width specified?
        z *= dx
    return z
steps = 50 #Number of integration steps
maxtime = 5 #Maximum time
dt = float(maxtime) / steps #Obtain the width of a time step
time = [dt * i for i in range(steps)] #Create an array of times
exp1 = [math.exp(-t) for t in time] #Create an array of function values
exp2 = [2 * math.exp(-2 * t) for t in time]
#Calculate the analytical expression
analytical = [2 * math.exp(-2 * t) * (-1 + math.exp(t)) for t in time]
#Calculate the trapezoidal convolution
trapezoidal = convolve(exp1, exp2, dt)
#Calculate the scipy convolution
sci = signal.convolve(exp1, exp2, mode = 'full')
#Slice the first half to obtain the causal convolution and multiply by dt
#to account for the step width
sci = sci[0:steps] * dt
#Calculate the convolution using the original Riemann sum algorithm
riemann = convolveoriginal(exp1, exp2)
riemann = riemann[0:steps] * dt
#Plot
plt.plot(time, analytical, label = 'analytical')
plt.plot(time, trapezoidal, 'o', label = 'trapezoidal')
plt.plot(time, riemann, 'o', label = 'Riemann')
plt.plot(time, sci, '.', label = 'scipy.signal.convolve')
plt.legend()
plt.show()
Thank you for your time!
Or, for those who prefer numpy to C: it will be slower than the C implementation, but it's just a few lines.
>>> t = np.linspace(0, maxtime-dt, 50)
>>> fx = np.exp(-np.array(t))
>>> gx = 2*np.exp(-2*np.array(t))
>>> analytical = 2 * np.exp(-2 * t) * (-1 + np.exp(t))
This looks like the trapezoidal rule in this case (but I didn't check the math):
>>> s2a = signal.convolve(fx[1:], gx, 'full')*dt
>>> s2b = signal.convolve(fx, gx[1:], 'full')*dt
>>> s = (s2a+s2b)/2
>>> s[:10]
array([ 0.17235682, 0.29706872, 0.38433313, 0.44235042, 0.47770012,
0.49564748, 0.50039326, 0.49527721, 0.48294359, 0.46547582])
>>> analytical[:10]
array([ 0. , 0.17221333, 0.29682141, 0.38401317, 0.44198216,
0.47730244, 0.49523485, 0.49997668, 0.49486489, 0.48254154])
largest absolute error:
>>> np.max(np.abs(s[:len(analytical)-1] - analytical[1:]))
0.00041657780840698155
>>> np.argmax(np.abs(s[:len(analytical)-1] - analytical[1:]))
6
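To check the math: with m = k - 1, expanding the two shifted discrete convolutions gives
s[m] = (s2a[m] + s2b[m]) / 2 = sum_{i=1..m} fx[i]*gx[m+1-i] + (fx[0]*gx[m+1] + fx[m+1]*gx[0]) / 2,
i.e. interior samples at full weight and the two endpoint samples at half weight, which is exactly the trapezoidal rule for the convolution integral at t = (m+1)*dt. That one-sample offset is also why s is compared against analytical[1:] above.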
Short answer: Write it in C!
Long answer
Using the cookbook about numpy arrays I rewrote the trapezoidal convolution method in C. In order to use the C code one requires three files (https://gist.github.com/1626919)
The C code (performancemodule.c).
The setup file to build the code and make it callable from python (performancemodulesetup.py).
The python file that makes use of the C extension (performancetest.py)
The code should run after downloading by doing the following:
Adjust the include path in performancemodule.c.
Run the following
python performancemodulesetup.py build
python performancetest.py
You may have to copy the library file performancemodule.so or performancemodule.dll into the same directory as performancetest.py.
Results and performance
The results agree neatly with one another as shown below:
The performance of the C method is even better than scipy's convolve method. Running 10k convolutions with array length 50 requires
convolve (seconds, microseconds) 81 349969
scipy.signal.convolve (seconds, microseconds) 1 962599
convolve in C (seconds, microseconds) 0 87024
Thus, the C implementation is about 1000 times faster than the python implementation and a bit more than 20 times as fast as the scipy implementation (admittedly, the scipy implementation is more versatile).
EDIT: This does not solve the original question exactly but is sufficient for my purposes.