Partial symbolic derivative in Python

I need to partially differentiate my equation and form a matrix out of the derivatives. My equation is the expression A built in the code below, and the conditions that must be met are the substitutions T_a*a_0 = Taa_0, T_a*a_1 = Taa_1 and T_a*a_2 = Taa_2 that appear there as well. For this I've used the sympy module and its diff() function. My code so far is:
from sympy import *
import numpy as np
init_printing()  # delete if you don't have LaTeX installed
logt_r, logt_a, T, T_a, a_0, a_1, a_2, logS, Taa_0, Taa_1, Taa_2 = symbols('logt_r, logt_a, T, T_a, a_0, a_1, a_2, logS, Taa_0, Taa_1, Taa_2')
A = (logt_r - logt_a - (T - T_a) * (a_0 + a_1 * logS + a_2 * logS**2) )**2
parametri = [logt_a, a_0, Taa_0, a_1, Taa_1, a_2, Taa_2]
M = expand(A)
M = M.subs(T_a*a_0, Taa_0)
M = M.subs(T_a*a_1, Taa_1)
M = M.subs(T_a*a_2, Taa_2)
K = zeros(len(parametri), len(parametri))
O = []
def odv(par):
    for j in range(len(par)):
        for i in range(len(par)):
            P = diff(M, par[i])/2
            B = P.coeff(par[j])
            K[i,j] = B
    return K
odv(parametri)
My result is the 7×7 coefficient matrix K that this code produces (the rendered output is omitted here).
My problem
The problem that I'm having is with the partial derivatives of the products (T_a*a_0, T_a*a_1 and T_a*a_2), because diff() cannot differentiate with respect to a product (obviously); you get an error:
ValueError:
Can't calculate 1-th derivative wrt T_a*a_0.
To solve this I substituted these products with single coefficients, like:
M = M.subs(T_a*a_0, Taa_0)
M = M.subs(T_a*a_1, Taa_1)
M = M.subs(T_a*a_2, Taa_2)
But as you can see in the final result, this works only in some cases. I would like to know if there is a better way of doing this, one where I wouldn't need to substitute the products and which would work in all cases.
ADDITIONAL INFORMATION
Let me rephrase my question: is it possible, using Python (the sympy module, for that matter), to symbolically differentiate an expression with respect to another expression rather than a plain symbol?

So I've managed to solve my problem on my own. The main question was how to symbolically differentiate a function or equation with respect to another function. Going slowly through the sympy documentation again, I spotted a little detail that I had missed before.
In order to differentiate with respect to a compound expression, you need to mark that expression's class as a valid differentiation variable by setting its _diff_wrt attribute. For example:
x, y, z = symbols('x, y, z')
A = x*y*z
B = x*y
# This is the detail:
type(B)._diff_wrt = True
diff(A, B)
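Note that type(B) here is Mul, so this flips the flag for every product in the session. A hedged alternative that avoids mutating the class is to substitute a Dummy symbol for the target expression, differentiate, and substitute back; this is a sketch of a common SymPy idiom, not an official API:
from sympy import symbols, Dummy
def diff_wrt_expr(expr, wrt):
    # replace the target expression by a Dummy, differentiate, then restore it
    d = Dummy()
    return expr.subs(wrt, d).diff(d).subs(d, wrt)
x, y, z = symbols('x, y, z')
print(diff_wrt_expr(x*y*z, x*y))  # expected: z
Either way, diff(A, B) for the toy example should come out as z, since A factors as (x*y)*z.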
Or in my case, the code looks like:
koef = [logt_a, a_0, T_a*a_0, a_1, T_a*a_1, a_2, T_a*a_2]
M = expand(A)
K = zeros(len(koef), len(koef))
def odvod_mat(par):
    for j in range(len(par)):
        for i in range(len(par)):
            type(par[i])._diff_wrt = True
            P = diff(M, par[i])/2
            B = P.coeff(par[j])
            K[i,j] = B
            # Removal of T_a
            K[i,j] = K[i,j].subs(T_a, 0)
    return K
odvod_mat(koef)
Thanks again to all who took the time to read this. I hope it helps anyone who runs into the same problem I did.

Related

Solving ill-posed non-linear equations numerically in python/SymPy

I'm trying to get a solution by running the code below.
Python just "hangs" and won't find a numeric solution. I can use an app on my phone (Desmos) to graph the functions and find a numeric solution easily, 0.024. Does python have limitations when solving for 2 decimal places?
import sympy
x = sympy.symbols('x')
e_1 = x**-0.5
e_2 = -2*sympy.log(0.0001*3.7**-1*0.05**-1+2.51*350000**-1*x**-0.5, 10)
sol = sympy.solve(e_2 - e_1, x, 0.024)
num = float(sol[0])
print(num)
Usually, nsolve is the SymPy tool used to numerically solve an equation (or a system of equations). However, I wasn't able to use it here: it kept raising errors. The problem is that your function is defined on a very small region, and the zero is very close to the boundary of that region.
So, in this case we can try numerical root finding techniques. Again, some tools might fail, but after a few tries I've found that bisect works fine:
from sympy import *
from scipy.optimize import root, brentq, bisect
x = symbols('x')
# you didn't provide the diameter, so I've computed it based on your expected solution
d = 1.56843878182221
e_1 = x**-0.5
e_2 = -2 * log(0.0001 * 3.7**-1 * d**-1 + 2.51 * 350000**-1 * x**-0.5, 10)
# convert the symbolic expression to a numerical function
ff = lambdify(x, e_1 - e_2)
root, output = bisect(ff, 0.023, 0.025, full_output=True)
print(root)
# 0.024000000001862646
print(output)
# converged: True
# flag: 'converged'
# function_calls: 32
# iterations: 30
# root: 0.024000000001862646
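The other bracketing solvers imported above should behave similarly on the same interval; as a quick hedged sketch reusing ff and the bracket from above:
# brentq usually converges in fewer iterations than plain bisection
r = brentq(ff, 0.023, 0.025)
print(r)  # expect a value close to 0.024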
The fixed point method is a great one to use for situations like this. Or at least the principles of transforming the equation into a compatible form can benefit standard solvers by providing a less ill-behaved form of the function.
You have an ill-defined equation in the form y - g(y) where y = 1/sqrt(x). So let's get the inverse of g (call it G) so we can solve G(y) - G(g(y)) = G(y) - y instead.
>>> y = symbols('y')
>>> g = e_2.subs(1/x**.5, y)
>>> d = Dummy()
>>> G = solve(g - d, y)[0].subs(d, y)
>>> nsolve(G - y, 6)
6.45497224367903
>>> solve(1/x**.5 - _, dict=True)
[{x: 0.024}]
The process of rearranging an equation f(x) into form x - g(x) could probably use a built-in method in SymPy, but it's not too hard to do this by replacing each x with a dummy variable, solving for it, and then replacing the dummy symbols with x again. Different g will be more favorable for finding different roots as can be seen in the example below where the purple dashed line is good for finding the root near 1 while the solid blue is better near the smaller root.
Here is a possibility for a "fixed point form" function:
def fixedpoint_Eqs(eq, x=None):
    """rearrange to give eq in form x = g(x)"""
    f = eq.free_symbols
    fp = []
    if x is None:
        assert len(f) == 1, 'must specify x in this case'
        x = list(f)[0]
    Xeq = eq.replace(lambda _: _ == x, lambda _: Dummy())
    X = Xeq.free_symbols - f
    reps = {xi: x for xi in X}
    for xi in X:
        try:
            g = solve(Xeq, xi, dict=True)
            if len(g) != 1:
                raise NotImplementedError
            fp.append(Eq(x, g[0][xi].xreplace(reps)))
        except NotImplementedError:
            pass
    return fp
>>> fixedpoint_Eqs(x+exp(x)+1/x-5)
Eq(x, -1/(x + exp(x) - 5))
Eq(x, -exp(x) + 5 - 1/x)
Eq(x, log(-x + 5 - 1/x))
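To see one of these forms in action, here is a hedged sketch of plain fixed-point iteration on the third form (the log one), which should settle on the root near 1:
from sympy import symbols, log
x = symbols('x')
g = log(-x + 5 - 1/x)  # right-hand side of the third equation above
xi = 1.0
for _ in range(50):
    xi = float(g.subs(x, xi))
print(xi)  # approaches about 1.096, a root of x + exp(x) + 1/x - 5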

solving differential equation with step function

I am trying to solve this differential equation as part of my assignment. I am not able to understand how to put the condition for u in the code. In the code shown below, I arbitrarily provided
u = 5.
2 dx(t)/dt = -x(t) + u(t)
5 dy(t)/dt = -y(t) + x(t)
u = 2 S(t - 5)
x(0) = 0
y(0) = 0
where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by two, it changes from zero to two at that same time, t=5.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def model(x, t, u):
    dxdt = (-x + u)/2
    return dxdt

def model2(y, x, t):
    dydt = -(y + x)/5
    return dydt

x0 = 0
y0 = 0
u = 5
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
y = odeint(model2, y0, t, args=(u,))
plt.plot(t, x, 'r-')
plt.plot(t, y, 'b*')
plt.show()
I do not know the SciPy library very well, but going by the example in the documentation I would try something like this:
def model(x, t, K, PT):
    """
    The model consists of the state x in R^2, the time in R and the two
    parameters K and PT that define the input u as a step function, where
    K is the height of the step and PT is its delay.
    """
    x1, x2 = x               # split the state into two variables
    u = K if t >= PT else 0  # this is the system input
    # here comes the differential equation in vectorized form
    dx = [(-x1 + u)/2,
          (-x2 + x1)/5]
    return dx
x0 = [0, 0]
K = 2
PT = 5
t = np.linspace(0,40)
x = odeint(model, x0, t, args=(K, PT))
plt.plot(t, x[:, 0], 'r-')
plt.plot(t, x[:, 1], 'b*')
plt.show()
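One hedged refinement: the jump in u at t = PT makes the right-hand side discontinuous, which can degrade odeint's adaptive stepping. odeint accepts a tcrit argument listing such critical points:
# let the integrator know about the discontinuity at t = PT
x = odeint(model, x0, t, args=(K, PT), tcrit=[PT])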
You have a couple of issues here, and the step function is only a small part of them. You could define the step function with a simple lambda and capture it from the outer scope without even passing it to your function, but since sometimes that won't be possible, we'll be explicit and pass it.
Your next problem is the order of arguments in the function you integrate. As per the docs it is (y, t, ...): first the state, then the time vector, then the other args arguments. So for the first part we get:
u = lambda t: 2 if t > 5 else 0

def model(x, t, u):
    dxdt = (-x + u(t))/2
    return dxdt

x0 = 0
y0 = 0
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
Moving to the next part: the trouble is that you can't feed x in as an argument for y, because it's a vector of values of x(t) at particular times, so y + x doesn't make sense inside the function as you wrote it. You can follow your intuition from math class if you pass an x function instead of the x values. Doing so requires interpolating the x values at the specific times the solver asks for (which scipy can handle, no problem):
from scipy.interpolate import interp1d

# flatten because the shapes are off; extrapolate because odeint will go out of bounds
xfunc = interp1d(t.flatten(), x.flatten(), fill_value="extrapolate")

def model2(y, t, x):
    dydt = (-y + x(t))/5  # matches 5 dy/dt = -y + x from the problem statement
    return dydt

y = odeint(model2, y0, t, args=(xfunc,))
Then you get the expected delayed step response for y (plot omitted). Sven's answer is more idiomatic for vector programming with scipy/numpy, but I hope mine provides a clearer path from what you already know to a working solution.

two dimensional fit with python

I need to fit a function
z(u,v) = C u v^p
That is, I have a two-dimensional data set, and I have to find two parameters, C and p. Is there something in numpy or scipy that can do this in a straightforward manner? I took a look at scipy.optimize.leastsq, but it's not clear to me how I would use it here.
import scipy.optimize

def f(x, u, v, z_data):
    C = x[0]
    p = x[1]
    modelled_z = C*u*v**p
    diffs = modelled_z - z_data
    # leastsq expects a 1D array of residuals; it doesn't matter that the data
    # is conceptually 2D, provided you flatten it consistently
    return diffs.flatten()

result = scipy.optimize.leastsq(f, [1.0, 1.0],        # initial guess at starting point
                                args=(u, v, z_data))  # alternatively, use closure variables in f
# result is the best fit point
For your specific function you might be able to do it better - for example, for any given value of p there is one best value of C that can be determined by straightforward linear algebra.
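To make that concrete: minimizing sum((C*w - z)**2) over C alone, with w = u*v**p held fixed, gives a closed form. A hedged sketch (standard least-squares algebra, not part of the original answer; assumes numpy is imported as np):
def best_C(p, u, v, z):
    # d/dC sum((C*w - z)**2) = 0  gives  C = sum(z*w) / sum(w*w)
    w = u * v**p
    return np.sum(z * w) / np.sum(w * w)
You could then minimize the residual over the single scalar p, for example with scipy.optimize.minimize_scalar.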
You can transform the problem into a simple linear least squares problem, and then you don't need leastsq() at all.
z[i] == C * u[i] * v[i]**p
becomes
z[i]/u[i] == C * v[i]**p
And then
log(z[i]/u[i]) == log(C) + p * log(v[i])
Change variables and you can solve as a simple linear problem:
Z[i] == L + p * V[i]
Using numpy and assuming you have the data in arrays z, u and v, this is rendered as:
Z = np.log(z/u)
V = np.log(v)
p, L = np.polyfit(V, Z, 1)
C = np.exp(L)
You probably ought to put a try: and except: around it in case some of the u values are zero or there are negative values.
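Alternatively, you can mask out the offending points before taking the logs. A hedged sketch, assuming z, u and v are NumPy arrays:
# suppress the divide/invalid warnings, then keep only the finite log values
with np.errstate(divide='ignore', invalid='ignore'):
    Z = np.log(z/u)
    V = np.log(v)
ok = np.isfinite(Z) & np.isfinite(V)
p, L = np.polyfit(V[ok], Z[ok], 1)
C = np.exp(L)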

Solve an implicit ODE (differential algebraic equation DAE)

I'm trying to solve a second order ODE using odeint from scipy. The issue I'm having is the function is implicitly coupled to the second order term, as seen in the simplified snippet (please ignore the pretend physics of the example):
import numpy as np
from scipy.integrate import odeint
def integral(y, t, F_l, mass):
    dydt = np.zeros_like(y)
    x, v = y
    F_r = (((1-a)/3)**2 + (2*(1+a)/3)**2) * v  # 'a' implicit
    a = (F_l - F_r)/mass
    dydt = [v, a]
    return dydt
y0 = [0,5]
time = np.linspace(0.,10.,21)
F_lon = 100.
mass = 1000.
dydt = odeint(integral, y0, time, args=(F_lon,mass))
In this case I realise it is possible to solve algebraically for the implicit variable, but in my actual scenario there is a lot of logic between F_r and the evaluation of a, and algebraic manipulation fails.
I believe the DAE could be solved using MATLAB's ode15i function, but I'm trying to avoid that scenario if at all possible.
My question is: is there a way to solve implicit ODE functions (DAEs) in Python (scipy preferably)? And is there a better way to pose the problem above in order to do so?
As a last resort, it may be acceptable to pass a from the previous time-step. How could I pass dydt[1] back into the function after each time-step?
Quite old, but worth updating so it may be useful to anyone who stumbles upon this question. There are quite a few packages currently available in Python that can solve implicit ODEs.
GEKKO (https://github.com/BYU-PRISM/GEKKO) is one of them; it specializes in dynamic optimization for mixed-integer, nonlinear optimization problems, but can also be used as a general-purpose DAE solver.
The above "pretend physics" problem can be solved in GEKKO as follows.
import numpy as np
import matplotlib.pyplot as plt
from gekko import GEKKO

m = GEKKO()
m.time = np.linspace(0, 100, 101)
F_l = m.Param(value=1000)
mass = m.Param(value=1000)
m.options.IMODE = 4
m.options.NODES = 3
F_r = m.Var(value=0)
x = m.Var(value=0)
v = m.Var(value=0, lb=0)
a = m.Var(value=5, lb=0)
m.Equation(x.dt() == v)
m.Equation(v.dt() == a)
m.Equation(F_r == (((1-a)/3)**2 + (2*(1+a)/3)**2) * v)
m.Equation(a == (F_l - F_r)/mass)
m.solve(disp=False)
plt.plot(m.time, x.value)
plt.show()
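For reference, IMODE=4 should select GEKKO's simultaneous dynamic-simulation mode, and NODES=3 the number of collocation nodes per time segment; double-check the GEKKO documentation for the exact option semantics.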
If algebraic manipulation fails, you can go for a numerical solution of your constraint, running for example fsolve at each timestep:
import sys
from numpy import linspace
from scipy.integrate import odeint
from scipy.optimize import fsolve
y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 10.
mass = 1000.
def F_r(a, v):
    return (((1 - a) / 3) ** 2 + (2 * (1 + a) / 3) ** 2) * v

def constraint(a, v):
    return (F_lon - F_r(a, v)) / mass - a

def integral(y, _):
    v = y[1]
    a, _, ier, mesg = fsolve(constraint, 0, args=[v, ], full_output=True)
    if ier != 1:
        print("I couldn't solve the algebraic constraint, error:\n\n", mesg)
        sys.stdout.flush()
    return [v, a]

dydt = odeint(integral, y0, time)
Clearly this will slow down your time integration. Always check that fsolve finds a good solution, and flush the output so that you can realize it as it happens and stop the simulation.
About how to "cache" the value of a variable at a previous timestep: you can exploit the fact that default arguments are evaluated only once, at function definition:
from numpy import linspace
from scipy.integrate import odeint
def integral(y, _, F_l, M, cache=[0]):
    # the cache starts at 0; you could choose a better initial guess using fsolve
    v, preva = y[1], cache[0]
    # use the value of 'a' from the previous timestep
    F_r = (((1 - preva) / 3) ** 2 + (2 * (1 + preva) / 3) ** 2) * v
    # calculate the new value and store it for the next call
    a = (F_l - F_r) / M
    cache[0] = a
    return [v, a]
y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 100.
mass = 1000.
dydt = odeint(integral, y0, time, args=(F_lon, mass))
Notice that in order for the trick to work the cache parameter must be mutable, and that's why I use a list. See this link if you are not familiar with how default arguments work.
Notice that the two codes DO NOT produce the same result, and you should be very careful using the value at the previous timestep, both for numerical stability and precision. The second is clearly much faster though.

Solving an implicit quadratic system of 3 variables

I am trying to solve a system of equations that has 3 variables and a variable number of equations.
Basically, the system is between 5 and 12 equations long, and regardless of how many equations there are, I am trying to solve for 3 variables.
It looks like this:
(x-A)**2 + (y-B)**2 + (z-C)**2 = (c(t-d))**2
I know A,B,C, and the whole right side.
A,B,C and the right side are all arrays of length n, where n varies randomly between 5 and 12. So then we have a system of equations that changes in size.
I believe I need to use numpy's lstsq function and do something like:
data, data1 = getData()  # I will have to do this for 2 unique systems
A = data[:, 0]
B = data[:, 1]
C = data[:, 2]
tid = data[:, 3]
P = (x - A)**2 + (y - B)**2 + (z - C)**2
b = tid
solved = lstsq(P, b)
print(solved)
This however doesn't work, since x, y and z are the unknowns hidden inside P; they would have to be pulled out of P for lstsq to apply.
Help!
What you probably need is scipy.optimize.minimize() which works with arbitrary (nonlinear) equations. numpy.linalg.lstsq() only solves a system of linear equations, and this problem is pretty definitely nonlinear (although there are techniques to linearize systems of equations, I think this is not what you want in this case).
It is likely that a system of >3 equations in 3 variables has no exact solution, so you have to define how to measure how good a given "solution" is even though it doesn't actually solve the system of equations. How to pose this as a minimization problem depends on the physical or problem-domain interpretation of what you are actually trying to do. One possibility is, for the following equations (which are a slightly rearranged version of yours),
(x-A1)**2 + (y-B1)**2 + (z-C1)**2 - T1**2 = 0
(x-A2)**2 + (y-B2)**2 + (z-C2)**2 - T2**2 = 0
...
try to minimize the sum of the absolute values of all the left hand sides (which should be zero if the equation is solved exactly). In other words, you want the x, y, z that produce the minimum of the following function
sum( abs( (x-A1)**2 + (y-B1)**2 + (z-C1)**2 - T1**2 ) + abs( (x-A2)**2 + (y-B2)**2 + (z-C2)**2 - T2**2 ) + ... )
Code example: v is an ndarray of shape (3,) containing x, y, z; and A, B, C, tid are ndarrays of shape (N,), where N is the number of equations.
import numpy
import scipy.optimize

def F(v, A, B, C, tid):
    x = v[0]
    y = v[1]
    z = v[2]
    # note: assumes tid already holds the squared right-hand side values
    return numpy.sum(numpy.abs((x - A)**2 + (y - B)**2 + (z - C)**2 - tid))

v_initial = numpy.array([x0, y0, z0])  # starting guesses
result = scipy.optimize.minimize(F, v_initial, args=(A, B, C, tid))
v = result.x
x, y, z = v.tolist()  # the best solution found
This should be close to working but I haven't tested it. You may need some extra arguments to minimize(), for example method, tol, ...
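Since the objective is a sum of absolute values and hence non-smooth, a derivative-free method is a safe hedge; for example (method and tol are standard scipy.optimize.minimize arguments):
result = scipy.optimize.minimize(F, v_initial, args=(A, B, C, tid),
                                 method='Nelder-Mead', tol=1e-9)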
