I have the following set of equations, and I want to solve them simultaneously for X and Y. I've been advised that I could use numpy to solve these as a system of linear equations. Is that the best option, or is there a better way?
a = (((f * X) + (f2 * X3)) / (1 + (f * X) + (f2 * X3))) * i
b = ((f2 * X3) / (1 + (f * X) + (f2 * X3))) * i
c = ((f * X) / (1 + (j * X) + (k * Y))) * i
d = ((k * Y) / (1 + (j * X) + (k * Y))) * i
f = 0.0001
i = 0.001
j = 0.0001
k = 0.001
e = 0 = X + a + b + c
g = 0.0001 = Y + d
h = i - a
As noted by Joe, this is actually a system of nonlinear equations. You are going to need more firepower than numpy alone provides.
Solution of nonlinear equations is tricky, and the typical approach is to define an objective function
F(z) = sum( e[n]^2, n=1...13 )
where z is a vector containing a value for each of your 13 variables a, b, c, d, e, f, f2, g, h, i, X, X3, Y, and e[n] is the amount by which equation n is violated. For example
e[3] = (d - ((k * Y) / (1 + (j * X) + (k * Y))) * i )
Once you have that objective function, then you can apply a nonlinear solver to try to find a z for which F(z)=0. That of course corresponds to a solution to your equations.
Commonly used solvers include:
The Solver in Microsoft Excel
The Python library scipy.optimize
Fitting routines in the GNU Scientific Library
MATLAB's Optimization Toolbox
Note that all of them will work far better if you first alter your set of equations to eliminate as many variables as practical before trying to run the solver (e.g. by substituting for k wherever it is found). The reduced dimensionality makes a big difference.
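For instance, here is a minimal sketch of that approach with scipy.optimize.least_squares, after eliminating a, b, c and d by substitution. It treats X3 as X cubed and invents a value for f2; neither is pinned down in the question, so both are illustrative assumptions.

from scipy.optimize import least_squares

f, i, j, k = 0.0001, 0.001, 0.0001, 0.001
f2 = 0.0001      # not given in the question; assumed for illustration

def residuals(z):
    X, Y = z
    X3 = X**3    # assumption: "X3" is taken to mean X cubed
    a = ((f * X + f2 * X3) / (1 + f * X + f2 * X3)) * i
    b = ((f2 * X3) / (1 + f * X + f2 * X3)) * i
    c = ((f * X) / (1 + j * X + k * Y)) * i
    d = ((k * Y) / (1 + j * X + k * Y)) * i
    # remaining constraints from the question: 0 = X + a + b + c and 0.0001 = Y + d
    return [X + a + b + c, Y + d - 0.0001]

sol = least_squares(residuals, x0=[1.0, 1.0])   # arbitrary starting guess
X, Y = sol.x

least_squares minimizes the sum of squared residuals directly, which is exactly the F(z) described above, and the elimination of a, b, c, d illustrates the dimensionality reduction recommended here.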
Related
I am trying to solve an equation, but the solve() function is taking over 10 minutes even on a high-RAM Colab notebook. Are there any simplifications I can make to the problem to speed this along? Here is the code:
from sympy import symbols, sqrt, Eq, solve

x, y, x_0, y_0, x_new, y_new, t, f = symbols('x y x_0 y_0 x_new y_new t f')

D = (2 * (1 - t) * sqrt(x * y) + t * (x + y)) / (2 * (x + y) * sqrt(x * y))
D_old = D.subs([(x, x_0), (y, y_0)])
D_new = D.subs([(x, x_new), (y, y_new)])

delta_D = D_new - D_old
target = Eq(delta_D, f)
answer = solve(target, x_new)
If it is taking a long time, you must be trying to solve for one of the x or y values, which requires solving a messy cubic with many symbolic parameters. It would be better to substitute in the values of interest and then use nsolve to find the root of interest. Otherwise, you can get a symbolic solution to the generic cubic, g3 = solve(a*x**3 + b*x**2 + c*x + d, x), and then substitute in the corresponding expressions for the coefficients of collect(sympy.solvers.solvers.unrad(target.rewrite(Add))[0], v), where v is the variable of interest. But I won't bog this down with more details until it is clear what you really want to do.
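For instance, a minimal sketch of the nsolve route, assuming the symbols and target from the snippet above are already defined; the numeric values and the starting guess are made-up, purely for illustration:

from sympy import nsolve

vals = {x_0: 1.0, y_0: 2.0, y_new: 2.5, t: 0.3, f: 0.01}   # assumed numbers
numeric_eq = target.subs(vals)                              # only x_new remains free
root_x_new = nsolve(numeric_eq, x_new, 1.0)                 # 1.0 is an assumed starting guess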
I have a second-order linear ODE y'' + (w/c)^2 * (cos(2*pi*x/d) + q0) * y = 0. I need to get the exact and numerical solution. First, I tried to solve it with sympy.dsolve using this code:
from sympy import symbols, Function, Eq, cos, pi, dsolve

c = 1
w = 1.5
d = 1
q0 = 2
wc = (w / c)**2

x = symbols('x')
y = Function('y')

equation = Eq(y(x).diff(x, x) + wc * (cos(2 * pi * x / d) + q0) * y(x), 0)
y_x = dsolve(equation)
y_x
but when I ran it, Jupyter said: "The kernel has died. It will restart automatically". I have already solved 2 ODEs with this method, but those equations did not contain cos(kx) and they had initial conditions. I guess that is where the problem lies, but I don't know how to fix it.
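(For the numerical side of this, a minimal sketch with scipy.integrate.solve_ivp; the initial conditions y(0) = 1, y'(0) = 0 and the integration interval are assumptions, since the question does not state them.)

import numpy as np
from scipy.integrate import solve_ivp

c, w, d, q0 = 1.0, 1.5, 1.0, 2.0
wc = (w / c)**2

def rhs(x, y):
    # y[0] = y, y[1] = y'
    return [y[1], -wc * (np.cos(2 * np.pi * x / d) + q0) * y[0]]

# assumed initial conditions and interval
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True)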
Background.
I'm attempting to write a python implementation of this answer over on Math SE. You may find the following background to be useful.
Problem
I have an experimental setup consisting of three (3) receivers, with known locations [xi, yi, zi], and a transmitter with unknown location [x,y,z] emitting a signal at known velocity v. This signal arrives at the receivers at known times ti. The time of emission, t, is unknown.
I wish to find the angle of arrival (i.e. the transmitter's polar coordinates theta and phi), given only this information.
Solution
It is not possible to locate the transmitter exactly with only three (3) receivers, except in a handful of unique cases (there are several great answers across Math SE explaining why this is the case). In general, at least four (and, in practice, >>4) receivers are required to uniquely determine the rectangular coordinates of the transmitter.
The direction to the transmitter, however, may be "reliably" estimated. Letting vi be the position vector of receiver i, ti the time of signal arrival at receiver i, and n the unit vector pointing in the (approximate) direction of the transmitter, we obtain the following equations:
<n, vj - vi> = v(ti - tj)
(where < > denotes the scalar product)
...for all pairs of indices i,j. Together with |n| = 1, the system has 2 solutions in general, symmetric by reflection in the plane through vi/vj/vk. We may then determine phi and theta by simply writing n in polar coordinates.
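For reference, a minimal sketch of that last step under the usual physics convention (theta measured from the +z axis, phi in the x-y plane from +x); the convention itself is an assumption, since the problem statement does not fix one:

import math

def to_angles(nx, ny, nz):
    theta = math.acos(nz)       # polar angle, 0..pi
    phi = math.atan2(ny, nx)    # azimuthal angle, -pi..pi
    return theta, phi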
Implementation.
I've attempted to write a python implementation of the above solution, using scipy's fsolve.
from dataclasses import dataclass
import scipy.optimize
import random
import math

c = 299792

@dataclass
class Vertexer:
    roc: list

    def fun(self, var, dat):
        (x, y, z) = var

        eqn_0 = (x * (self.roc[0][0] - self.roc[1][0])) + (y * (self.roc[0][1] - self.roc[1][1])) + (z * (self.roc[0][2] - self.roc[1][2])) - c * (dat[1] - dat[0])
        eqn_1 = (x * (self.roc[0][0] - self.roc[2][0])) + (y * (self.roc[0][1] - self.roc[2][1])) + (z * (self.roc[0][2] - self.roc[2][2])) - c * (dat[2] - dat[0])
        eqn_2 = (x * (self.roc[1][0] - self.roc[2][0])) + (y * (self.roc[1][1] - self.roc[2][1])) + (z * (self.roc[1][2] - self.roc[2][2])) - c * (dat[2] - dat[1])
        norm = math.sqrt(x**2 + y**2 + z**2) - 1

        return [eqn_0, eqn_1, eqn_2, norm]

    def find(self, dat):
        result = scipy.optimize.fsolve(self.fun, (0, 0, 0), args=dat)
        print('Solution ', result)


# Crude code to simulate a source and receivers at random locations
x0 = random.randrange(0, 50); y0 = random.randrange(0, 50); z0 = random.randrange(0, 50)

x1 = random.randrange(0, 50); x2 = random.randrange(0, 50); x3 = random.randrange(0, 50)
y1 = random.randrange(0, 50); y2 = random.randrange(0, 50); y3 = random.randrange(0, 50)
z1 = random.randrange(0, 50); z2 = random.randrange(0, 50); z3 = random.randrange(0, 50)

t1 = math.sqrt((x0 - x1)**2 + (y0 - y1)**2 + (z0 - z1)**2) / c
t2 = math.sqrt((x0 - x2)**2 + (y0 - y2)**2 + (z0 - z2)**2) / c
t3 = math.sqrt((x0 - x3)**2 + (y0 - y3)**2 + (z0 - z3)**2) / c

print('Actual coordinates ', x0, y0, z0)

myVertexer = Vertexer([[x1, y1, z1], [x2, y2, z2], [x3, y3, z3]])
myVertexer.find([t1, t2, t3])
Unfortunately, I have far more experience solving such problems in C/C++ using GSL, and have limited experience working with scipy and the like. I'm getting the error:
TypeError: fsolve: there is a mismatch between the input and output shape of the 'func' argument 'fun'.Shape should be (3,) but it is (4,).
...which seems to suggest that fsolve expects a square system.
How may I solve this rectangular system? I can't seem to find anything useful in the scipy docs.
If necessary, I'm open to using other (Python) libraries.
As you already mentioned, fsolve expects a system with N variables and N equations, i.e. it finds a root of the function F: R^N -> R^N. Since you have four equations, you simply need to add a fourth variable. Note also that fsolve is a legacy function, and it's recommended to use root instead. Last but not least, note that sqrt(x^2+y^2+z^2) = 1 is equivalent to x^2+y^2+z^2=1 and that the latter is much less susceptible to rounding errors caused by the finite differences when approximating the jacobian of F.
Long story short, your class should look like this:
from scipy.optimize import root

@dataclass
class Vertexer:
    roc: list

    def fun(self, var, dat):
        x, y, z, *_ = var

        eqn_0 = (x * (self.roc[0][0] - self.roc[1][0])) + (y * (self.roc[0][1] - self.roc[1][1])) + (z * (self.roc[0][2] - self.roc[1][2])) - c * (dat[1] - dat[0])
        eqn_1 = (x * (self.roc[0][0] - self.roc[2][0])) + (y * (self.roc[0][1] - self.roc[2][1])) + (z * (self.roc[0][2] - self.roc[2][2])) - c * (dat[2] - dat[0])
        eqn_2 = (x * (self.roc[1][0] - self.roc[2][0])) + (y * (self.roc[1][1] - self.roc[2][1])) + (z * (self.roc[1][2] - self.roc[2][2])) - c * (dat[2] - dat[1])
        norm = x**2 + y**2 + z**2 - 1

        return [eqn_0, eqn_1, eqn_2, norm]

    def find(self, dat):
        result = root(self.fun, (0, 0, 0, 0), args=dat)
        if result.success:
            print('Solution ', result.x[:3])
Let (0,0) and (Xo,Yo) be two points on a Cartesian plane. We want to determine the parabolic curve, Y = AX^2 + BX + C, which passes through these two points and has a given arc length equal to S. Obviously, S > sqrt(Xo^2 + Yo^2). Since the curve must pass through (0,0), C = 0. Hence, the curve equation reduces to: Y = AX^2 + BX. How can I determine {A,B} knowing {Xo,Yo,S}? There are two solutions; I want the one with A > 0.
I have an analytical solution (complex) that gives S for a given set of {A,B,Xo,Yo}, though here the problem is inverted... I can proceed by solving numerically a complex system of equations... but perhaps there is a numerical routine out there that does exactly this?
Any useful Python library? Other ideas?
Thanks a lot :-)
Note that the arc length (line integral) of the quadratic y = a*x^2 + b*x from x = 0 to x = x0 is given by the integral of sqrt(1 + (2ax + b)^2) over that interval. Evaluating the integral gives 0.5 * (I(u) - I(l)) / a, where u = 2*a*x0 + b, l = b, and I(t) = 0.5 * (t * sqrt(1 + t^2) + log(t + sqrt(1 + t^2))), the antiderivative of sqrt(1 + t^2).
Since y0 = a * x0^2 + b * x0, b = y0/x0 - a*x0. Substituting the value of b in u and l, u = y0/x0 + a*x0, l = y0/x0 - a*x0. Substituting u and l in the solution of the line integral (arc length), we get the arc length as a function of a:
s(a) = 0.5 * (I(y0/x0 + a*x0) - I(y0/x0 - a*x0)) / a
Now that we have the arc length as a function of a, we simply need to find the value of a for which s(a) = S. This is where my favorite root-finding algorithm, the Newton-Raphson method, comes into play yet again.
The working algorithm for the Newton-Raphson method of finding roots is as follows:
For a function f(x) whose root is to be obtained, if x(i) is the ith guess for the root,
x(i+1) = x(i) - f(x(i)) / f'(x(i))
Where f'(x) is the derivative of f(x). This process is continued till the difference between two consecutive guesses is very small.
In our case, f(a) = s(a) - S and f'(a) = s'(a). By simple application of the chain rule and the quotient rule,
s'(a) = 0.5 * (a*x0 * (I'(u) + I'(l)) + I(l) - I(u)) / (a^2)
Where I'(t) = sqrt(1 + t^2).
The only problem that remains is calculating a good initial guess. Due to the nature of the graph of s(a), the function is an excellent candidate for the Newton-Raphson method, and an initial guess of y0 / x0 converges to the solution in about 5-6 iterations for a tolerance/epsilon of 1e-10.
Once the value of a is found, b is simply y0/x0 - a*x0.
Putting this into code:
from math import sqrt, log

def find_coeff(x0, y0, s0):
    def dI(t):
        return sqrt(1 + t*t)

    def I(t):
        rt = sqrt(1 + t*t)
        return 0.5 * (t * rt + log(t + rt))

    def s(a):
        u = y0/x0 + a*x0
        l = y0/x0 - a*x0
        return 0.5 * (I(u) - I(l)) / a

    def ds(a):
        u = y0/x0 + a*x0
        l = y0/x0 - a*x0
        return 0.5 * (a*x0 * (dI(u) + dI(l)) + I(l) - I(u)) / (a*a)

    N = 1000          # maximum number of Newton-Raphson iterations
    EPSILON = 1e-10   # convergence tolerance

    guess = y0 / x0   # initial guess
    for i in range(N):
        dguess = (s(guess) - s0) / ds(guess)
        guess -= dguess
        if abs(dguess) <= EPSILON:
            print("Break:", abs((s(guess) - s0)))
            break
        print(i+1, ":", guess)

    a = guess
    b = y0/x0 - a*x0
    print(a, b, s(a))
Run the example on CodeSkulptor.
Note that due to the rational approximation of the arc lengths given as input to the function in the examples, the coefficients obtained may ever so slightly differ from the expected values.
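As a quick sanity check (the numbers are illustrative only), the call below asks for the parabola through (0, 0) and (1, 1) with arc length 2, which exceeds the chord length sqrt(2) as required:

find_coeff(1.0, 1.0, 2.0)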
Now I am facing a problem when I use scipy.integrate.ode.
I want to use a spectral method (Fourier transform) to solve a PDE with dispersive and convection terms, such as
du/dt = A * d^3 u / dx^3 + C * du/dx
After the Fourier transform, this PDE becomes a set of ODEs in complex space (uk is a complex vector):
duk/dt = (A * coeff^3 + C * coeff) * uk
coeff = (2 * pi * i * k) / L
where k is the wavenumber (e.g. k = 0, 1, 2, 3, -4, -3, -2, -1),
i^2 = -1,
and L is the length of the domain.
When I use r = ode(uODE).set_integrator('zvode', method='adams'), Python warns:
c ZVODE-- At current T (=R1), MXSTEP (=I1) steps
taken on this call before reaching TOUT
In above message, I1 = 500
In above message, R1 = 0.2191432098050D+00
I suspect this is because the time step I chose is too large; however, I cannot decrease the time step, since every step is time-consuming for my real problem. Is there another way to resolve this?
Did you consider solving the ODEs symbolically? With Sympy you can type
import sympy as sy
sy.init_printing()  # use IPython for better results

from sympy.abc import A, C, c, x, t  # variables
u = sy.Function('u')(x, t)

eq = sy.Eq(u.diff(t), c*u)
sl1 = sy.pdsolve(eq, u)
print("The solution of:")
sy.pprint(eq)
print("was determined to be:")
sy.pprint(sl1)
print("")

print("Substituting the coefficient:")
k, L = sy.symbols("k L", real=True)
coeff = (2 * sy.pi * sy.I * k) / L
cc = (A * coeff**3 + C * coeff)
sl2 = sy.simplify(sl1.replace(c, cc))
sy.pprint(sl2)
gives the following output:
The solution of:
  ∂u(x, t)/∂t = c⋅u(x, t)
was determined to be:
  u(x, t) = F(x)⋅exp(c⋅t)
Substituting the coefficient:
  u(x, t) = F(x)⋅exp(-2⋅ⅈ⋅π⋅k⋅t⋅(4⋅π²⋅A⋅k² - C⋅L²)/L³)
Note that F(x) depends on your initial values of u(x,t=0), which you need to provide.
Use sl2.rhs.evalf() to substitute in numbers.
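If a fully numerical route is preferred, the decoupled modes can also be propagated exactly with NumPy's FFT. A minimal sketch follows, in which A, C, L, the grid size and the initial profile are all assumed values for illustration:

import numpy as np

A, C, L, N = 1.0, 1.0, 2 * np.pi, 64
x = np.linspace(0.0, L, N, endpoint=False)
u0 = np.exp(-10 * (x - L / 2)**2)        # assumed initial profile u(x, t=0)

k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers 0, 1, ..., -4, ..., -1
coeff = 2j * np.pi * k / L
lam = A * coeff**3 + C * coeff           # duk/dt = lam * uk, as in the question

def propagate(t):
    # exact solution of each mode: uk(t) = uk(0) * exp(lam * t)
    uk = np.fft.fft(u0) * np.exp(lam * t)
    return np.real(np.fft.ifft(uk))

u_final = propagate(0.5)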