I am currently trying to find the maximum radius of a circle I can fit between the existing circles around it.
i.e. I'm trying to find not only the maximum radius, but also the best center point for it along a specific given straight line.
In order to find that maximum, I'm trying to implement a generalized Lagrange multipliers solution using SymPy.
If "n" is the number of constraints I have, then I was able to:
Generate the n multiplier symbols.
Take the required gradient of the Lagrange function.
Build the required inequalities from the constraints, giving the list of equalities and inequalities that needs to be solved.
The code:
from sympy import S
from sympy import *
import sympy as smp

# Lagrange multipliers
def sympy_distfun(cx, cy, radius):
    x, y = smp.symbols('x y', real=True)
    return sqrt((x - cx)**2 + (y - cy)**2) - radius

def sympy_circlefun(cx, cy, radius):
    x, y = smp.symbols('x y', real=True)
    return (x - cx)**2 + (y - cy)**2 - radius**2

def sympy_linefun(slope, b):
    x, y = smp.symbols('x y', real=True)
    return slope*x + b - y

def lagrange_multiplier(objective, constraints):
    x, y = smp.symbols('x y', real=True)
    a = list(smp.symbols('a0:%d' % len(constraints), real=True))
    cons = [constraints[i]*a[i] for i in range(len(a))]
    L = objective + (-1)*sum(cons)
    gradL = [smp.diff(L, var) for var in [x, y] + a]
    constraints = [con >= 0 for con in constraints]
    eqs = gradL + constraints
    vars = a + [x, y]
    solution = smp.solve(eqs[0], vars)
    #solution = smp.solveset(eqs, vars)
    print(solution)

line = sympy_linefun(0.66666, -4.3333)
dist = sympy_distfun(11, 3, 4)
circlefunc1 = sympy_circlefun(11, 3, 4)
circlefunc2 = sympy_circlefun(0, 0, 3)
lagrange_multiplier(dist, [line, circlefunc1, circlefunc2])
But, when using smp.solveset(eqs,vars) I encounter the error message:
ValueError: [-0.66666*a0 - a1*(2*x - 22) - 2*a2*x + (x - 11)/sqrt((x - 11)**2 + (y - 3)**2), a0 - a1*(2*y - 6) - 2*a2*y + (y - 3)/sqrt((x - 11)**2 + (y - 3)**2), -0.66666*x + y + 4.3333, -(x - 11)**2 - (y - 3)**2 + 16, -x**2 - y**2 + 9, 0.66666*x - y - 4.3333 >= 0, (x - 11)**2 + (y - 3)**2 - 16 >= 0, x**2 + y**2 - 9 >= 0] is not a valid SymPy expression
When using solution = smp.solve(eqs[0], vars) to try to solve just one equation, it sends SymPy into a CPU-crushing frenzy and it never finishes the calculation. I made sure to declare all variables as real, so I fail to see why it takes so long to solve.
I would like to understand what I'm missing when it comes to handling multiple inequalities with SymPy, and if there is a faster, more optimized way to solve a Lagrange multiplier problem, I'd love to give it a try.
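As an aside on the "faster way" part: one option is to skip the symbolic KKT system entirely and hand the problem to a numerical solver. Below is a minimal sketch (assuming SciPy is available) that treats the unknown radius as a third variable and maximizes it with SLSQP; the bounds on x are an extra assumption added only to keep the problem bounded, since with just two fixed circles the radius can grow without limit along the line.

import numpy as np
from scipy.optimize import minimize

# Decision variables v = (x, y, r): circle center and radius; maximize r by minimizing -r.
cons = [
    {"type": "eq",   "fun": lambda v: 0.66666 * v[0] - 4.3333 - v[1]},            # center lies on the line
    {"type": "ineq", "fun": lambda v: np.hypot(v[0] - 11, v[1] - 3) - 4 - v[2]},  # clear of the circle at (11, 3), R = 4
    {"type": "ineq", "fun": lambda v: np.hypot(v[0], v[1]) - 3 - v[2]},           # clear of the circle at (0, 0), R = 3
]
bounds = [(0, 11), (None, None), (0, None)]   # assumed x-range and r >= 0

res = minimize(lambda v: -v[2], x0=[5.0, -1.0, 1.0],
               method="SLSQP", bounds=bounds, constraints=cons)
print("center:", res.x[:2], "radius:", res.x[2])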
I am new to Python and I am working on a finance project to solve a set of equations that lets me go from par spread to flat spread for a CDS.
I have a set of data for the upfront (U) and the years (i); to set up the data sample, I name the upfronts x and the years y:
x = [-0.007,-0.01,-0.009,-0.004,0.005,0.011,0.018,0.027,0.037,0.048]
y = [1,2,3,4,5,6,7,8,9,10]
Here are the 3 equations that I am trying to solve together:
U = A(s(i)-c)
L(i) = 1 - (1 - (s(i) / (1 - R)) ** i) / (1 - (1 / (s(i-1) - R)) ** (i - 1))
A = sum([((1 - L(i)) / (1 + r)) ** j for j in range(1, i+1)])
Detailed explanation:
The goal is to solve and list the results for all 10 values of variable s
The 1st equation is used to calculate the upfront amount, where s is the unknown.
The 2nd equation is used to calculate the hazard rate L, where R is the recovery rate, s(i) is the current s term and s(i-1) is the previous s term.
The 3rd equation is used to calculate the annual risky annuity; its purpose is to calculate and sum the risky annuities. For example, if i = 1, there should be one term in the equation; if i = 2, there should be 2 terms that are summed. This repeats until the 10th iteration, where there are 10 values and they are summed.
To attempt to solve the problem, I wrote the following code (which doesn't run yet):
x = [-0.007, -0.01, -0.009, -0.004, 0.005, 0.011, 0.018, 0.027, 0.037, 0.048]
y = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
c = 0.01
r = 0.01
R = 0.4

def eqs(s, U, t, c=0.01, r=0.01, R=0.4):
    L = 1 - (1 - (s / (1 - R)) ** t) / (1 - (1 / (1 - R)) ** (t - 1))
    A = sum([((1 - L) / (1 + r)) ** j for j in range(1, i+1)])
    s = (U/A) + c
    return L, A, s

for U, t in zip(x, y):
    s = fsolve(eq1, 0.01, (U, t,))
    print(s, U, t)
Main obstacles:
I haven't found a way to make Equation 3 work.
I also haven't been able to pass the two data sets into the for loop that calls the function.
I wasn't able to loop the previous spread value, s(i-1), back into the iteration to compute the next value.
I was able to solve it manually in Python by changing the third equation every iteration and inputting the previous results.
I am hoping I can find some solution to my problem, thank you for your help in advance!
It took me a bit, but I think I got it. Your main problem is that you can't just code formulas that describe a complex problem, call a 'magic' fsolve function and hope that Python will solve it for you, without even defining what the unknown is.
It doesn't work that way. You have to make your problem simple enough that it can be solved with existing functions from some library. Python has no form of intelligence or divination.
As I said in my comments, fsolve() from scipy.optimize can only solve problems of the form f(x) = 0.
If you want to use it, you have to transform your complex problem into a simple f(x) = 0 problem.
Starting from your 3rd equation, s = (U/A) + c, we can deduce that s - (U/A) - c = 0.
Given that A is a function of L and L is a function of s, if you define a function f(s) = s - (U/A) - c, then s is the solution of f(s) = 0.
That is what I did in the following code:
from scipy.optimize import fsolve

def Lambda(s, sold, R, t):
    num = (1 - s / (1 - R)) ** t
    den = (1 - sold / (1 - R)) ** (t - 1)
    return 1 - num/den

def Annuity(L, r, Aold, j):
    return Aold + ((1 - L) / (1 + r)) ** j

def f(s, U, sold, R, t, r, Aold, j):
    L = Lambda(s, sold, R, t)
    A = Annuity(L, r, Aold, j)
    return s - (U/A) - c

x = [-0.007, -0.01, -0.009, -0.004, 0.005, 0.011, 0.018, 0.027, 0.037, 0.048]
y = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
c = 0.01
r = 0.01
R = 0.4
sold = 0.
Aold = 0.
for n, (U, t) in enumerate(zip(x, y)):
    j = n + 1
    print("j={},U={},t={}".format(j, U, t))
    init = 0.01  # the starting estimate for the roots of f(s) = 0
    roots = fsolve(f, init, args=(U, sold, R, t, r, Aold, j))
    s = roots[0]
    L = Lambda(s, sold, R, t)
    A = Annuity(L, r, Aold, j)
    print("s={},L={},A={}".format(s, L, A))
    print()
    sold = s
    Aold = A
It gives the following output:
j=1,U=-0.007,t=1
s=0.00289571337037,L=0.00482618895061,A=0.985320604999
j=2,U=-0.01,t=2
s=0.00485464221105,L=0.0113452406083,A=1.94349944361
j=3,U=-0.009,t=3
s=0.00685582655826,L=0.0180633847507,A=2.86243751076
j=4,U=-0.004,t=4
s=0.00892769166807,L=0.0251666093582,A=3.73027037175
j=5,U=0.005,t=5
s=0.0111024600844,L=0.0328696834011,A=4.53531159145
j=6,U=0.011,t=6
s=0.0120640333844,L=0.0280806661972,A=5.32937116379
j=7,U=0.018,t=7
s=0.0129604367831,L=0.0305170484121,A=6.08018387787
j=8,U=0.027,t=8
s=0.0139861021632,L=0.0351929301367,A=6.77353436882
j=9,U=0.037,t=9
s=0.0149883645118,L=0.0382416644539,A=7.41726068981
j=10,U=0.048,t=10
s=0.0159931206639,L=0.041597709395,A=8.00918297693
No idea if it's correct, but it looks plausible to me. I guess you get the idea now and will be able to make some adjustments.
Let's suppose that I have two transcendental functions f(x, y) = 0 and g(a, b) = 0.
a and b depend on y, so if I could solve the first equation analytically for y, i.e. y = y(x), the second function would depend only on x and I could then solve it numerically.
I prefer to use Python, but if MATLAB is able to handle this, that is OK for me.
Is there a way to solve transcendental functions analytically for a variable with Python/MATLAB? A Taylor approximation is fine too, as long as I can choose the order of approximation.
I tried running this through Sympy like so:
import sympy
j, k, m, x, y = sympy.symbols("j k m x y")
eq = sympy.Eq(k * sympy.tan(y) + j * sympy.tan(sympy.asin(sympy.sin(y) / x)), m)
eq.simplify()
which turned your equation into
Eq(m, j*sin(y)/(x*sqrt(1 - sin(y)**2/x**2)) + k*tan(y))
which, after a bit more poking, gives us
k * tan(y) + j * sin(y) / sqrt(x**2 - sin(y)**2) == m
We can find an expression for x(y) with
sympy.solve(eq, x)
which returns
[-sqrt(j**2*sin(y)**2/(k*tan(y) - m)**2 + sin(y)**2),
sqrt(j**2*sin(y)**2/(k*tan(y) - m)**2 + sin(y)**2)]
but an analytic solution for y(x) fails.
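Even though the analytic route fails, you can still get y numerically for any given x (and values of j, k, m), for instance with sympy.nsolve. A small sketch; the numbers below are only placeholders:

import sympy

j, k, m, x, y = sympy.symbols("j k m x y")
eq = sympy.Eq(k * sympy.tan(y) + j * sympy.tan(sympy.asin(sympy.sin(y) / x)), m)

vals = {j: 1.0, k: 2.0, m: 1.5, x: 3.0}       # placeholder parameter values
y_sol = sympy.nsolve(eq.subs(vals), y, 0.5)   # 0.5 is just an initial guess for the root
print(y_sol)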
First, I don't really know Mathematica, and I haven't done stats in a very long time.
I have been trying to find (Google and RTFM) a way to reproduce the results produced by the Mathematica LinearModelFit function using scipy.stats.linregress. It is now obvious that this is not the way to go except for the simplest cases.
LinearModelFit[ydata, 1/(2 n - x)^100, x]
produces 16.3766 + <<70>>/(2580 - x)^100
If someone could point me in the right direction I would appreciate it.
Thanks in advance.
data: http://pastebin.com/RTp5em0W
Screen shot of Mathematica Notebook: http://imgur.com/owMg3r8
Note: I did not do the Mathematica work. ddd is the data that can be found at the pastebin link. The y in the denominator should be x.
I don't know the Python solution, but one way to handle this problem is to transform your x data according to the functional form you are supplying as the argument to LinearModelFit:
n=1290
LinearModelFit[ydata, 1/(2 n - x)^100, x]["BestFit"]
16.1504 + 1.471945513739138*10^315/(2580 - x)^100
is equivalent to:
xtransform = 1/(2 n - #)^100 & /@ Range[Length[ydata]];
LinearModelFit[Transpose[{xtransform, ydata}], x, x]["BestFit"]
16.1504 + 1.471945513739138*10^315 x
You should readily be able to do that transform and use standard linear regression in Python. However, you might have precision issues due to the large exponent.
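For example, a rough Python sketch of that transform (assuming the pastebin y-values have been saved to a local file, here called ydata.txt). The integer SCALE factor only keeps the transformed x values inside float64 range, so the true coefficient of 1/(2n - x)^100 is fit.slope * 10**315:

from fractions import Fraction
import numpy as np
from scipy.stats import linregress

ydata = np.loadtxt("ydata.txt")   # hypothetical local copy of the pastebin data
n = 1290                          # so that 2*n - x matches the (2580 - x) in the question

SCALE = 10**315                   # exact integer scaling, applied before converting to float
xt = np.array([float(Fraction(SCALE, (2 * n - k)**100)) for k in range(1, len(ydata) + 1)])

fit = linregress(xt, ydata)
print("intercept:", fit.intercept)
print("coefficient of 1/(2n - x)^100:", fit.slope, "* 1e315")
print("R^2:", fit.rvalue**2)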
A simple algorithm requiring no complex functionality can be written, and it can be coded in any language.
The y data is imported.
y = {11.56999969, 14.47999954, ... , 340.730011, 202.1699982, 4054.949951};
Linear regression coefficients a and b are found by solving the normal equations (see the note below for the derivation). Once computed, they can be reused without needing a solver.
Clear[a, b, n, Σx, Σy, Σxy, Σx2]
Column[{a, b} = Simplify[First[{a, b} /. Solve[{
(* Normal equations for straight line *)
Σy == n a + b Σx,
Σxy == a Σx + b Σx2},
{a, b}]]]]
(Σx Σxy - Σx2 Σy)/(Σx^2 - n Σx2)
(-n Σxy + Σx Σy)/(Σx^2 - n Σx2)
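For reference, the same normal equations can be cross-checked symbolically in Python with SymPy (a quick sketch):

import sympy as sp

# Symbols standing in for n, Σx, Σy, Σxy and Σx2.
a, b, n, Sx, Sy, Sxy, Sx2 = sp.symbols('a b n Sx Sy Sxy Sx2')
sol = sp.solve([sp.Eq(Sy, n * a + b * Sx),      # normal equation from d/da
                sp.Eq(Sxy, a * Sx + b * Sx2)],  # normal equation from d/db
               [a, b])
print(sp.simplify(sol[a]))   # equivalent to (Σx Σxy - Σx2 Σy)/(Σx^2 - n Σx2)
print(sp.simplify(sol[b]))   # equivalent to (Σx Σy - n Σxy)/(Σx^2 - n Σx2)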
X is linearised to x according to the model.
n = Length[y]
1267
X = Range[n];
x = Map[1/(2 n - #)^100 &, X];
Quantities are computed:
Σx = Sum[x[[i]], {i, n}];
Σy = Sum[y[[i]], {i, n}];
Σxy = Sum[x[[i]]*y[[i]], {i, n}];
Σx2 = Sum[x[[i]]^2, {i, n}];
Implementing the coefficient formulae:
a = (Σx Σxy - Σx2 Σy)/(Σx^2 - n Σx2)
b = (Σx Σy - n Σxy)/(Σx^2 - n Σx2)
16.65767846718208
4.213538401691473*10^313
Plotting the regression line on the linearised data (with scaling).
scaled = 10^340;
Show[ListPlot[Transpose[{x scaled, y}],
PlotRange -> {Automatic, {0, 30}}],
ListPlot[Transpose[{x scaled, Table[a + b i, {i, x}]}],
PlotRange -> All, PlotStyle -> Red]]
Reapplying the model, the least-squares fit is: a + b/(2 n - X)^100
Show[ListPlot[Transpose[{X, y}],
PlotRange -> {Automatic, {0, 400}}],
Plot[a + b/(2 n - X)^100, {X, 0, n},
PlotRange -> {Automatic, {0, 400}}, PlotStyle -> Red]]
This matches the built-in solution from Mathematica shown below.
Also calculating R squared.
(* Least-squares regression of y on x *)
Array[(Y[#] = a + b x[[#]]) &, n];
Array[(e[#] = y[[#]] - Y[#]) &, n];
(* Residual or unexplained sum of squares *)
RSS = Sum[e[i]^2, {i, n}];
(* Total sum of squares in the dependent variable, measured about its mean *)
TSS = (y - Mean[y]).(y - Mean[y]);
(* Coefficient of determination, R^2 *)
R2 = 1 - RSS/TSS
0.230676
Checking with Mathematica's built-in functionality.
Clear[x]
lm = LinearModelFit[y, 1/(2 n - x)^100, x];
lm["BestFit"]
lm["RSquared"]
0.230676
Note on the normal equations
Source: Econometric Methods
I'm new to Python and have started working on a coordinate conversion program. The problem is that I can't find an iterative method to solve one of the expressions.
Expressions:
N = a / math.sqrt(1 - (e2 * (math.sin(phi))**2))
phi = math.atan((Z / math.sqrt((X**2) + (Y**2))) * ((1-e2) * (N / N + hei)**-1))
lam = math.atan(Y / X)
hei = (math.sqrt((X ** 2) + (Y ** 2))) / math.cos(phi)
Here, a and e2 are constants.
The user is supposed to enter the values of X, Y and Z and obtain phi, lam and hei. But, given that N is a function of phi, it is necessary to create a loop, setting hei = 0 in the second equation as an initial value in order to obtain a first approximation for phi. However, I don't know how to end that cycle once phi has converged (for instance, when 9 or more of its decimals are equal to the value from the previous pass through the loop).
You could break the loop based on the difference between the value of phi from the previous iteration and that from the current one, i.e. stop once the difference is smaller than 10^(-9).
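A minimal sketch of that loop (assuming WGS84-style defaults for a and e2; the phi update uses the standard closed form tan(phi) = Z / (p * (1 - e2 * N/(N + hei))), which differs slightly from the expression in the question):

import math

def xyz_to_geodetic(X, Y, Z, a=6378137.0, e2=6.69437999014e-3, tol=1e-9, max_iter=100):
    """Iterate until phi changes by less than tol between passes."""
    p = math.sqrt(X**2 + Y**2)
    lam = math.atan2(Y, X)                      # atan2 gets the quadrant right and handles X = 0
    phi = math.atan(Z / (p * (1 - e2)))         # first approximation (equivalent to setting hei = 0)
    for _ in range(max_iter):
        N = a / math.sqrt(1 - e2 * math.sin(phi)**2)
        hei = p / math.cos(phi) - N             # note the "- N"; the question's formula omits it
        phi_new = math.atan(Z / (p * (1 - e2 * N / (N + hei))))
        if abs(phi_new - phi) < tol:            # stop once phi barely changes between passes
            phi = phi_new
            break
        phi = phi_new
    return phi, lam, hei

print(xyz_to_geodetic(4201000.0, 172460.0, 4780100.0))   # made-up ECEF coordinates in metres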