Solving an implicit quadratic system of 3 variables - python

I am trying to solve a system of equations that has 3 variables and a variable number of equations.
Basically, the system is between 5 and 12 equations long, and regardless of how many equations there are, I am trying to solve for 3 variables.
It looks like this:
(x-A)**2 + (y-B)**2 + (z-C)**2 = (c(t-d))**2
I know A,B,C, and the whole right side.
A,B,C and the right side are all arrays of length n, where n varies randomly between 5 and 12. So then we have a system of equations that changes in size.
I believe I need to use numpy's lstsq function and do something like:
data,data1 = getData() # I will have to do this for 2 unique systems.
A = data[:,0]
B = data[:,1]
C = data[:,2]
tid = data[:,3]
P = (x-A)**2 + (y-B)**2 + (z-C)**2
b = tid
solved = lstsq(P,b)
print(solved)
This, however, doesn't work: x, y, z are implicit (P cannot even be evaluated without knowing them), so they would have to be pulled out of P for lstsq to apply.
Help!

What you probably need is scipy.optimize.minimize(), which works with arbitrary (nonlinear) equations. numpy.linalg.lstsq() only solves systems of linear equations, and this problem is definitely nonlinear (there are techniques to linearize systems of equations, but I think that is not what you want here).
It is likely that a system of more than 3 equations in 3 variables has no exact solution, so you have to define how to measure how good a given "solution" is even though it doesn't actually solve the system of equations. How to pose this as a minimization problem depends on the physical or problem-domain interpretation of what you are actually trying to do. One possibility is, for the following equations (which are a slightly rearranged version of yours),
(x-A1)**2 + (y-B1)**2 + (z-C1)**2 - T1**2 = 0
(x-A2)**2 + (y-B2)**2 + (z-C2)**2 - T2**2 = 0
...
to try to minimize the sum of the absolute values of all the left-hand sides (each of which should be zero if the corresponding equation is solved exactly). In other words, you want the x, y, z that produce the minimum of the following function:
abs( (x-A1)**2 + (y-B1)**2 + (z-C1)**2 - T1**2 ) + abs( (x-A2)**2 + (y-B2)**2 + (z-C2)**2 - T2**2 ) + ...
Code example: v is an ndarray of shape (3,) containing x, y, z; A, B, C, tid are ndarrays of shape (N,), where N is the number of equations.
import numpy
import scipy.optimize

def F(v, A, B, C, tid):
    x = v[0]
    y = v[1]
    z = v[2]
    # tid is assumed to already hold the squared right-hand side (c*(t-d))**2
    return numpy.sum(numpy.abs((x - A)**2 + (y - B)**2 + (z - C)**2 - tid))

v_initial = numpy.array([x0, y0, z0])   # starting guesses
result = scipy.optimize.minimize(F, v_initial, args=(A, B, C, tid))
v = result.x
x, y, z = v.tolist()   # the best solution found
This should be close to working but I haven't tested it. You may need some extra arguments to minimize(), for example method, tol, ...
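For what it's worth, here is a self-contained sketch of the same idea with made-up synthetic data (the names anchors, rhs_sq and the Nelder-Mead choice are mine, not from the question); it should recover the point used to generate the data:
import numpy as np
import scipy.optimize

# Synthetic test data: N known points (A, B, C) and the squared right-hand sides
rng = np.random.default_rng(0)
N = 8
true_xyz = np.array([1.0, -2.0, 3.0])
anchors = rng.uniform(-10.0, 10.0, size=(N, 3))        # columns play the role of A, B, C
rhs_sq = np.sum((true_xyz - anchors)**2, axis=1)       # exact squared distances

def F(v, A, B, C, rhs_sq):
    x, y, z = v
    return np.sum(np.abs((x - A)**2 + (y - B)**2 + (z - C)**2 - rhs_sq))

result = scipy.optimize.minimize(
    F, np.zeros(3),
    args=(anchors[:, 0], anchors[:, 1], anchors[:, 2], rhs_sq),
    method="Nelder-Mead")
print(result.x)   # should come out close to true_xyz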

Related

How to solve for an ordinary differential equation variable value using Python

I'm new to the SciPy.org maths libraries, so this may be a fairly basic question for those familiar with them.
For this ODE:
y'(t) - 0.05y(t) = d, y(0) = 10
how do I calculate the value of 'd' if y(10) = 100?
I can solve for y(t) this way:
import sympy as sym
y = sym.Function('y')
t, d = sym.symbols('t d')
y1 = sym.Derivative(y(t), t)
eqdiff = y1 - 0.05*y(t) - d
sol = sym.dsolve(eqdiff, y(t), ics={y(0): '10'})
sol
y(t) = -20.0*d + (20.0*d + 10.0)*exp(0.05*t)
Whether "sol" is usable to solve for d when y(10) = 100 is unknown to me (SymPy may not be the library of choice for this).
I've looked at numerous web pages such as these for ideas but haven't found a way forward:
https://docs.sympy.org/latest/modules/solvers/ode.html
Converting sympy expression to numpy expression before solving with fsolve( )
https://apmonitor.com/pdc/index.php/Main/SolveDifferentialEquations
I'm aware there are graphical ways to address the problem, but I want a numeric result.
Thanks in advance for helpful advice.
You can substitute the values and use solve:
In [5]: sol.subs(t, 10)
Out[5]: y(10) = 12.9744254140026⋅d + 16.4872127070013
In [6]: sol.subs(t, 10).subs(y(10), 100)
Out[6]: 100 = 12.9744254140026⋅d + 16.4872127070013
In [7]: solve(sol.subs(t, 10).subs(y(10), 100), d)
Out[7]: [6.43672337141557]
https://docs.sympy.org/latest/modules/solvers/solvers.html#sympy.solvers.solvers.solve
You can also solve it with scipy. The whole task is a boundary value problem with a free parameter: one state dimension plus one free parameter means two boundary conditions are needed. So use solve_bvp (even though this is a scalar problem, the solver treats the state as a vector):
from scipy.integrate import solve_bvp

def eqn(t, y, d): return d + 0.05*y
def bc(y0, y10, d): return [y0[0] - 10, y10[0] - 100]

x_init = [0, 10]
y_init = [[10, 100]]
d_init = 1

res = solve_bvp(eqn, bc, x_init, y_init, p=[d_init], tol=1e-5)
print(res.message, f"d={res.p[0]:.10f}")
which gives
The algorithm converged to the desired accuracy. d=6.4367242595
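As a quick cross-check (not part of either answer above, just plugging y(10) = 100 into the closed-form solution quoted earlier and solving for d by hand):
from math import exp

# y(t) = -20*d + (20*d + 10)*exp(0.05*t), so y(10) = 100 gives
d = (100 - 10*exp(0.5)) / (20*(exp(0.5) - 1))
print(d)   # 6.4367233714..., matching both the sympy and the solve_bvp results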

solving differential equation with step function

I am trying to solve this differential equation as part of my assignment. I am not able to understand how I can put the condition for u in the code. In the code shown below, I arbitrarily set
u = 5.
2 dx(t)/dt = -x(t) + u(t)
5 dy(t)/dt = -y(t) + x(t)
u = 2 S(t-5)
x(0)=0
y(0)=0
where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by two, it changes from zero to two at that same time, t=5.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def model(x, t, u):
    dxdt = (-x + u)/2
    return dxdt

def model2(y, x, t):
    dydt = -(y + x)/5
    return dydt

x0 = 0
y0 = 0
u = 5
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
y = odeint(model2, y0, t, args=(u,))
plt.plot(t, x, 'r-')
plt.plot(t, y, 'b*')
plt.show()
I do not know the SciPy library very well, but going by the example in the documentation I would try something like this:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def model(x, t, K, PT):
    """
    The model consists of the state x in R^2, the time in R and the two
    parameters K and PT describing the input u as a step function, where K
    is the height of the step and PT is the delay of the step.
    """
    x1, x2 = x                  # Split the state into two variables
    u = K if t >= PT else 0     # This is the system input
    # Here comes the differential equation in vectorized form
    dx = [(-x1 + u)/2,
          (-x2 + x1)/5]
    return dx

x0 = [0, 0]
K = 2
PT = 5
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(K, PT))
plt.plot(t, x[:, 0], 'r-')
plt.plot(t, x[:, 1], 'b*')
plt.show()
You have a couple of issues here, and the step function is only a small part of it. You can define the step function with a simple lambda, and you could even capture it from the outer scope without passing it to your function; but since that won't always be an option, we'll be explicit and pass it.
Your next problem is the order of arguments in the function you integrate: as per the docs it must be (y, t, ...), i.e. first the state, then the time, then any extra arguments passed via args. So for the first part we get:
u = lambda t: 2 if t > 5 else 0

def model(x, t, u):
    dxdt = (-x + u(t))/2
    return dxdt

x0 = 0
y0 = 0
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
Moving to the next part, the trouble is that you can't feed the array x in as an extra argument, because it is a vector of x(t) values at particular times, so y+x doesn't make sense inside the function as you wrote it. You can follow your intuition from math class if you pass an x function instead of the x values. Doing so requires interpolating the x values at the specific times the solver asks for (which scipy can handle, no problem):
from scipy.interpolate import interp1d

# flatten because the shapes are off; extrapolate because odeint may step out of bounds
xfunc = interp1d(t.flatten(), x.flatten(), fill_value="extrapolate")

def model2(y, t, x):
    dydt = -(y + x(t))/5
    return dydt

y = odeint(model2, y0, t, args=(xfunc,))
Then you get y(t) as well, and you can plot both solutions against t.
Sven's answer is more idiomatic for vector programming like scipy/numpy, but I hope my answer provides a clearer path from what you already know to a working solution.
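If you want to see the result, a short plotting snippet along the lines of the original code (assuming the t, x and y arrays from the two odeint calls above):
import matplotlib.pyplot as plt

plt.plot(t, x, 'r-', label='x(t)')
plt.plot(t, y, 'b*', label='y(t)')
plt.legend()
plt.xlabel('t')
plt.show()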

Partial symbolic derivative in Python

I need to partially differentiate my equation and form a matrix out of the derivatives. My equation is the squared expression A defined in the code below, and certain conditions on the parameters must be met.
For doing this I've used the sympy module and its diff() function. My code so far is:
from sympy import *
import numpy as np

init_printing()  # delete if you don't have LaTeX installed
logt_r, logt_a, T, T_a, a_0, a_1, a_2, logS, Taa_0, Taa_1, Taa_2 = symbols('logt_r, logt_a, T, T_a, a_0, a_1, a_2, logS, Taa_0, Taa_1, Taa_2')
A = (logt_r - logt_a - (T - T_a) * (a_0 + a_1 * logS + a_2 * logS**2) )**2
parametri = [logt_a, a_0, Taa_0, a_1, Taa_1, a_2, Taa_2]
M = expand(A)
M = M.subs(T_a*a_0, Taa_0)
M = M.subs(T_a*a_1, Taa_1)
M = M.subs(T_a*a_2, Taa_2)
K = zeros(len(parametri), len(parametri))
O = []
def odv(par):
    for j in range(len(par)):
        for i in range(len(par)):
            P = diff(M, par[i])/2
            B = P.coeff(par[j])
            K[i, j] = B
    return K
odv(parametri)
My result: (the matrix of coefficients produced by the code above)
My problem:
The problem I'm having is with the partial derivatives with respect to the products (T_a*a_0, T_a*a_1 and T_a*a_2), because the diff() function cannot, out of the box, differentiate with respect to a product, so you get an error:
ValueError:
Can't calculate 1-th derivative wrt T_a*a_0.
To solve this I substituted these products with coefficients, like:
M = M.subs(T_a*a_0, Taa_0)
M = M.subs(T_a*a_1, Taa_1)
M = M.subs(T_a*a_2, Taa_2)
But as you can see in the final result, this works only in some cases. I would like to know if there is a better way of doing this, one where I wouldn't need to substitute the products and that would work in all cases.
ADDITIONAL INFORMATION
Let me rephrase my question: is it possible to symbolically differentiate an equation with respect to a function or compound expression using Python, or more specifically the sympy module?
So I've managed to solve my problem on my own. The main question was how to symbolically differentiate a function or equation with respect to another function. Going slowly over the sympy documentation again, I spotted a little detail that I had missed before.
In order to differentiate with respect to a compound expression, you need to change a setting on the class of the expression you are differentiating with respect to. For example:
x, y, z = symbols('x, y, z')
A = x*y*z
B = x*y
# This is the detail:
type(B)._diff_wrt = True
diff(A, B)
Or in my case, the code looks like:
koef = [logt_a, a_0, T_a*a_0, a_1, T_a*a_1, a_2, T_a*a_2]
M = expand(A)
K = zeros(len(koef), len(koef))

def odvod_mat(par):
    for j in range(len(par)):
        for i in range(len(par)):
            type(par[i])._diff_wrt = True
            P = diff(M, par[i])/2
            B = P.coeff(par[j])
            K[i, j] = B
            # Removal of T_a
            K[i, j] = K[i, j].subs(T_a, 0)
    return K

odvod_mat(koef)
Thanks again to everyone who took the time to read this. I hope it helps anyone who runs into the same problem I did.

Solving a system of odes (with changing constant!) using scipy.integrate.odeint?

I currently have a system of odes with a time-dependent constant. E.g.
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a * x + y * z
    dy_dt = b * (y - z)
    dz_dt = -x*y + c*y - z
    return [dx_dt, dy_dt, dz_dt]
The constants are "a", "b" and "c". I currently have a list of "a" values, one per time step, which I would like to feed in at the corresponding time steps when using the scipy ODE solver... is this possible?
Thanks!
Yes, this is possible. In the case where a is constant, I guess you called scipy.integrate.odeint(fun, u0, t, args) where fun is defined as in your question, u0 = [x0, y0, z0] is the initial condition, t is a sequence of time points for which to solve for the ODE and args = (a, b, c) are the extra arguments to pass to fun.
In the case where a depends on time, you simply have to reconsider a as a function, for example (given a constant a0):
def a(t):
    return a0 * t
Then you will have to modify fun which computes the derivative at each time step to take the previous change into account:
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a(t) * x + y * z   # A change on this line: a -> a(t)
    dy_dt = b * (y - z)
    dz_dt = -x*y + c*y - z
    return [dx_dt, dy_dt, dz_dt]
Eventually, note that u0, t and args remain unchanged and you can again call scipy.integrate.odeint(fun, u0, t, args).
A word about the correctness of this approach: the accuracy of the numerical integration is affected, and I don't know precisely how (no theoretical guarantees here), but here is a simple example which works:
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
import scipy.integrate

tmax = 10.0

def a(t):
    if t < tmax / 2.0:
        return ((tmax / 2.0) - t) / (tmax / 2.0)
    else:
        return 1.0

def func(x, t, a):
    return -(x - a(t))

x0 = 0.8
t = np.linspace(0.0, tmax, 1000)
args = (a,)
y = sp.integrate.odeint(func, x0, t, args)

fig = plt.figure()
ax = fig.add_subplot(111)
h1, = ax.plot(t, y)
h2, = ax.plot(t, [a(s) for s in t])
ax.legend([h1, h2], ["y", "a"])
ax.set_xlabel("t")
ax.grid()
plt.show()
I hope this will help you.
No, that is not possible in the literal sense of
"I currently have a list of "a"s for every time-step which I would like to insert at every time-step"
as the solver has adaptive step size control, that is, it will use internal time steps that you have no control over, and each time step uses several evaluations of the function. Thus there is no connection between the solver time steps and the data time steps.
In the extended sense that the given data defines a piecewise constant step function however, there are several approaches to get to a solution.
You can integrate from jump point to jump point, using the ODE function with the constant parameter for this time segment. After that use numpy array operations like concatenate to assemble the full solution.
You can use interpolation functions like numpy.interp or scipy.interpolate.interp1d. The first gives a piecewise linear interpolation, which may not be desired here. The second returns a function object that can be configured as a "zero-order hold", i.e. a piecewise constant step function (a short sketch of this is given after this answer).
You could implement your own logic to go from the time t to the correct values of those parameters. This mostly applies if there is some structure to the data, for instance, if they have the form f(int(t/h)).
Note that the approximation order of the numerical integration is not only bounded by the order of the RK (solve_ivp) or multi-step (odeint) method, but also by the differentiability order of the (parts of) the differential equation. If the ODE is much less smooth than the order of the method, the implicit assumptions for the step size control mechanism are violated, which may result in a very small step size requiring a huge number of integration steps.
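To make the interpolation approach concrete, here is a minimal sketch (my own, with made-up t_data/a_data arrays) that builds a piecewise constant a(t) with interp1d and feeds it to odeint, reusing the simple relaxation ODE from the example in the previous answer:
import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

# Made-up tabulated data: one value of "a" per data time step
t_data = np.linspace(0.0, 10.0, 11)
a_data = np.sin(t_data)

# Zero-order hold: a(t) is the last tabulated value at or before t
a_func = interp1d(t_data, a_data, kind="previous",
                  bounds_error=False, fill_value=(a_data[0], a_data[-1]))

def func(x, t, a):
    # same relaxation ODE as in the example above: dx/dt = -(x - a(t))
    return -(x - a(t))

x0 = 0.8
t = np.linspace(0.0, 10.0, 500)
y = odeint(func, x0, t, args=(a_func,))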
I also encountered a similar problem. In my case the parameters a, b and c are not direct functions of time but are determined by x, y and z at that time, so I have to take x, y, z at time t and compute a, b, c for the integration step that produces x, y, z at t+dt. It turns out that if I change the dt value, the whole integration result changes dramatically, even to something unreasonable.

two dimensional fit with python

I need to fit a function
z(u,v) = C u v^p
That is, I have a two-dimensional data set, and I have to find two parameters, C and p. Is there something in numpy or scipy that can do this in a straightforward manner? I took a look at scipy.optimize.leastsq, but it's not clear to me how I would use it here.
import scipy.optimize

def f(x, u, v, z_data):
    C = x[0]
    p = x[1]
    modelled_z = C * u * v**p
    diffs = modelled_z - z_data
    # leastsq expects a 1D array of residuals; it doesn't matter that the data is
    # conceptually 2D, provided we flatten it consistently
    return diffs.flatten()

result = scipy.optimize.leastsq(f, [1.0, 1.0],        # initial guess at starting point
                                args=(u, v, z_data))  # alternatively use closure variables in f
# result[0] contains the best-fit parameters [C, p]
For your specific function you might be able to do better: for any given value of p there is one best value of C that can be determined by straightforward linear algebra, so the search effectively reduces to one dimension (a sketch of this follows).
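For example (my own sketch, not part of the answer above): for a fixed p, minimizing sum((C*u*v**p - z)**2) over C alone is ordinary linear least squares with the closed-form solution C = sum(g*z)/sum(g*g) where g = u*v**p, so only p needs a one-dimensional search:
import numpy as np
from scipy.optimize import minimize_scalar

# Synthetic data for illustration (in practice use your flattened u, v, z arrays)
rng = np.random.default_rng(0)
u = rng.uniform(0.5, 3.0, 100)
v = rng.uniform(0.5, 3.0, 100)
z = 2.5 * u * v**1.7 + rng.normal(scale=0.01, size=u.shape)

def best_C(p, u, v, z):
    # closed-form least-squares C for a fixed p
    g = u * v**p
    return np.dot(g, z) / np.dot(g, g)

def residual_sq(p, u, v, z):
    C = best_C(p, u, v, z)
    return np.sum((C * u * v**p - z)**2)

res = minimize_scalar(residual_sq, bracket=(0.5, 2.5), args=(u, v, z))
p_fit = res.x
C_fit = best_C(p_fit, u, v, z)
print(C_fit, p_fit)   # should come out close to 2.5 and 1.7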
You can transform the problem into a simple linear least squares problem, and then you don't need leastsq() at all.
z[i] == C * u[i] * v[i]**p
becomes
z[i]/u[i] == C * v[i]**p
And then
log(z[i]/u[i]) == log(C) + p * log(v[i])
Change variables and you can solve as a simple linear problem:
Z[i] == L + p * V[i]
Using numpy and assuming you have the data in arrays z, u and v, this is rendered as:
import numpy as np

Z = np.log(z / u)
V = np.log(v)
p, L = np.polyfit(V, Z, 1)
C = np.exp(L)
You will probably want to guard against zero or negative values in u, v or z before taking the logarithms (with numpy these produce inf/NaN and warnings rather than exceptions), for example by masking those points out.
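A hedged end-to-end sketch with synthetic data (the mask is one way to apply that guard; the names true_C and true_p are just for the demonstration):
import numpy as np

# Synthetic data with multiplicative noise, which is what the log transform assumes
rng = np.random.default_rng(1)
u = rng.uniform(0.5, 3.0, 200)
v = rng.uniform(0.5, 3.0, 200)
true_C, true_p = 2.5, 1.7
z = true_C * u * v**true_p * rng.lognormal(sigma=0.05, size=u.shape)

# Keep only points where the logarithms are well defined
mask = (u > 0) & (v > 0) & (z > 0)
Z = np.log(z[mask] / u[mask])
V = np.log(v[mask])

p, L = np.polyfit(V, Z, 1)
C = np.exp(L)
print(C, p)   # should come out close to true_C, true_p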
