Solving ill-posed non-linear equations numerically in Python/SymPy

I'm trying to get a solution by running the code below.
Python just "hangs" and won't find a numeric solution. I can use an app on my phone (Desmos) to graph the functions and find a numeric solution easily, 0.024. Does Python have limitations when solving to 2 decimal places?
import sympy
x = sympy.symbols('x')
e_1 = x**-0.5
e_2 = -2*sympy.log(0.0001*3.7**-1*0.05**-1+2.51*350000**-1*x**-0.5, 10)
sol = sympy.solve(e_2 - e_1, x, 0.024)
num = float(sol[0])
print(num)

Usually, nsolve is the SymPy tool used to numerically solve an equation (or a system of equations). However, I wasn't able to use it here: it kept raising errors. The problem is that your function is defined on a very small region, and the zero is very close to the boundary of that region (the original answer illustrated this with a plot).
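For reference, the call one would normally reach for is sketched below, using the e_1 and e_2 from the question; per the above, it reportedly kept erroring out because the iteration easily steps outside the narrow region where the equation is defined:
from sympy import symbols, log, nsolve
x = symbols('x')
e_1 = x**-0.5
e_2 = -2*log(0.0001*3.7**-1*0.05**-1 + 2.51*350000**-1*x**-0.5, 10)
# nsolve needs a starting point; here even a guess near the expected
# root reportedly failed for the reasons described above
sol = nsolve(e_2 - e_1, x, 0.024)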
So, in this case we can try numerical root-finding techniques. Again, some tools might fail, but after a few tries I've found that scipy's bisect works fine:
from sympy import *
from scipy.optimize import root, brentq, bisect
x = symbols('x')
# you didn't provide the diameter, so I've computed it based on your expected solution
d = 1.56843878182221
e_1 = x**-0.5
e_2 = -2 * log(0.0001 * 3.7**-1 * d**-1 + 2.51 * 350000**-1 * x**-0.5, 10)
# convert the symbolic expression to a numerical function
ff = lambdify(x, e_1 - e_2)
root, output = bisect(ff, 0.023, 0.025, full_output=True)
print(root)
# 0.024000000001862646
print(output)
# converged: True
# flag: 'converged'
# function_calls: 32
# iterations: 30
# root: 0.024000000001862646

The fixed point method is a great one to use for situations like this. Or at least the principles of transforming the equation into a compatible form can benefit standard solvers by providing a less ill-behaved form of the function.
You have an ill-defined equation in the form y - g(y) where y = 1/sqrt(x). So let's get the inverse of g (call it G) so we can solve G(y) - G(g(y)) = G(y) - y instead.
>>> y = symbols('y')
>>> g = e_2.subs(1/x**.5, y)
>>> d = Dummy()
>>> G = solve(g - d, y)[0].subs(d, y)
>>> nsolve(G - y, 6)
6.45497224367903
>>> solve(1/x**.5 - _, dict=True)
[{x: 0.0240000000000000}]
The process of rearranging an equation f(x) into the form x - g(x) could probably use a built-in method in SymPy, but it's not too hard to do by replacing each x with a dummy variable, solving for that dummy, and then replacing the dummy symbols with x again. Different choices of g are more favorable for finding different roots: in the plot accompanying the original answer, the purple dashed form was good for finding the root near 1 while the solid blue one was better near the smaller root.
Here is a possibility for a "fixed point form" function:
def fixedpoint_Eqs(eq, x=None):
    """rearrange to give eq in form x = g(x)"""
    f = eq.free_symbols
    fp = []
    if x is None:
        assert len(f) == 1, 'must specify x in this case'
        x = list(f)[0]
    # replace each occurrence of x with a distinct Dummy
    Xeq = eq.replace(lambda _: _ == x, lambda _: Dummy())
    X = Xeq.free_symbols - f
    reps = {xi: x for xi in X}
    # solve for each Dummy; every solvable occurrence yields one fixed-point form
    for xi in X:
        try:
            g = solve(Xeq, xi, dict=True)
            if len(g) != 1:
                raise NotImplementedError
            fp.append(Eq(x, g[0][xi].xreplace(reps)))
        except NotImplementedError:
            pass
    return fp
>>> fixedpoint_Eqs(x + exp(x) + 1/x - 5)
[Eq(x, -1/(x + exp(x) - 5)),
 Eq(x, -exp(x) + 5 - 1/x),
 Eq(x, log(-x + 5 - 1/x))]
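To close the loop, here is a minimal fixed-point iteration sketch of my own (not from the original answer), using the last of the forms above; started near 1, it converges to the root of x + exp(x) + 1/x - 5 that lies near 1.1:
from math import log
g = lambda x: log(-x + 5 - 1/x)  # right-hand side of Eq(x, log(-x + 5 - 1/x))
xn = 1.0
for _ in range(50):
    xn = g(xn)  # iterate x <- g(x) until it settles
print(xn)  # about 1.0958, a root of x + exp(x) + 1/x - 5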

Related

Python script for finding intersections for a graph function

I have this Python code for finding the intersections of the function f(x)=x**2+x-2 with g(x)=6-x:
import math
# brute force the functions with numbers until their Y values match, then do the same for the other point
def funcs():
    for X in range(-100, 100):
        funcA = (X**2)+X-2
        funcB = 6 - X
        if funcA == funcB:
            print("##INTERSECTION FOUND!!")
            print(f"({X},{funcA})")
            print(f"({X},{funcB})")
        else:
            pass
funcs()
But my problem is that the script only works with THAT SPECIFIC MATH FUNCTION; if I try to change the math function a little bit, the code won't work.
The code just checks when the Y values of f(x) and g(x) match, and does the same for the other point.
Here is the output:
##INTERSECTION FOUND!!
(-4,10)
(-4,10)
##INTERSECTION FOUND!!
(2,4)
(2,4)
In general, this is a root-finding problem.
Define h(x) = f(x) - g(x).
An intersection point x satisfies f(x) = g(x), i.e. h(x) = 0.
For root-finding problems there are many methods, e.g. the bisection method and Newton's method.
Here is a numerical example with the bisection method.
def f(x):
    return x ** 2 + x - 2

def g(x):
    return 6 - x

def h(x):
    return f(x) - g(x)

def bisection(a, b):
    eps = 10 ** -10
    ha = h(a)
    hb = h(b)
    if ha * hb > 0:
        raise ValueError("Bad input")
    for i in range(1000):  # fix iteration number
        ha = h(a)
        midpoint = (a + b) / 2
        hm = h(midpoint)
        if abs(hm) < eps:
            return midpoint
        if hm * ha >= 0:
            a = midpoint
        else:
            b = midpoint
    raise RuntimeError("Out of iterations")

if __name__ == '__main__':
    print(bisection(0, 100))
    print(bisection(-100, 0))
Output:
1.999999999998181
-3.999999999996362
Why are the numbers so close but not exact? Because the problem is solved numerically. Other answers that utilize the sympy package solve the problem symbolically, which gives the exact answer, but they only work for simple problems.
Why [0, 100] and [-100, 0]? Because I sketched the graph and know there is a root within each interval. In practice, the bisection method requires an interval [a, b] such that h(a) * h(b) < 0. Given the big interval [-100, 100] we have h(-100) * h(100) > 0, so the bisection method does not work in that case. The big interval has to be partitioned so that some sub-intervals [a, b] satisfy h(a) * h(b) < 0, say [-100, 0] and [0, 100]; a sketch of automating that scan follows below.
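One way to automate that partitioning is to scan a coarse grid for sign changes (a small sketch of my own, not part of the original answer); each returned pair is a valid input for bisection:
def find_brackets(h, a, b, n=201):
    # keep the subintervals of [a, b] where h changes sign; a root sitting
    # exactly on a grid point would be missed by the strict inequality,
    # so n is chosen here so the grid avoids the integer roots
    step = (b - a) / n
    xs = [a + i*step for i in range(n + 1)]
    return [(xs[i], xs[i+1]) for i in range(n) if h(xs[i]) * h(xs[i+1]) < 0]

# find_brackets(h, -100, 100) -> brackets around the roots -4 and 2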
Why abs(hm) < eps? This tests whether hm is close to 0. On a computer, we consider two floating-point numbers equal if the absolute value of their difference is smaller than a threshold eps. eps is usually 10 ** -10 to 10 ** -15, because Python floats carry about 15 significant decimal digits.
Newton's method will give you one of the roots, depending on the initial point; see the sketch below.
For further study, search root finding problem or numerical root finding.
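As a concrete illustration of that last point, here is a Newton's method sketch of my own (not part of the original answer), reusing h from above; the root it lands on depends on the starting point:
def newton(h, dh, x0, eps=1e-10, max_iter=100):
    # classic Newton iteration: x <- x - h(x)/h'(x)
    x = x0
    for _ in range(max_iter):
        hx = h(x)
        if abs(hx) < eps:
            return x
        x -= hx / dh(x)
    raise RuntimeError("Out of iterations")

dh = lambda x: 2*x + 2  # derivative of h(x) = x**2 + 2*x - 8
print(newton(h, dh, 10))   # about 2.0, starting right of the vertex at x=-1
print(newton(h, dh, -10))  # about -4.0, starting left of it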
As you want the intersection, you are looking for a solution of f(x) - g(x) = 0. So you can use fsolve in Python to find a root of f(x) - g(x):
from scipy.optimize import fsolve

def func(X):
    funcA = (X ** 2) + X - 2
    funcB = 6 - X
    return (funcA - funcB)

x = fsolve(func,0)
print(x)
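Note that fsolve returns an array and converges to a single root near the starting guess, so to recover both intersections you can try different initial points (a small extension of the snippet above):
for guess in (-10, 10):
    print(fsolve(func, guess))  # -> [-4.] then [2.]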
You could employ sympy, Python's symbolic math library:
from sympy import symbols, Eq, solve
X = symbols('X', real=True)
funcA = (X ** 2) + X - 2
funcB = 6 - X
sol = solve(Eq(funcA, funcB))
print(sol) # --> [-4, 2]
To obtain the corresponding values for funcA and funcB:
for s in sol:
    print(f'X={s} funcA({s})={funcA.subs(X, s)} funcB({s})={funcB.subs(X, s)}')
# X=-4 funcA(-4)=10 funcB(-4)=10
# X=2 funcA(2)=4 funcB(2)=4
For some functions, the result could still be symbolic, as that is the most exact form. .evalf() obtains a numeric approximation. For example:
funcA = X ** 2 + X - 2
funcB = -2*X ** 2 + X + 7
sol = solve(Eq(funcA, funcB))
for s in sol:
    print(f'X={s} funcA(X)={funcA.subs(X, s)} funcB(X)={funcB.subs(X, s)}')
    print(f'X={s.evalf()} funcA(X)={funcA.subs(X, s).evalf()} funcB(X)={funcB.subs(X, s).evalf()}')
Output:
X=-sqrt(3) funcA(X)=1 - sqrt(3) funcB(X)=1 - sqrt(3)
X=-1.73205080756888 funcA(X)=-0.732050807568877 funcB(X)=-0.732050807568877
X=sqrt(3) funcA(X)=1 + sqrt(3) funcB(X)=1 + sqrt(3)
X=1.73205080756888 funcA(X)=2.73205080756888 funcB(X)=2.73205080756888

Having trouble with using sympy subs command when trying to solve a function at an x value of 0

I have a function, -4*x/sqrt(1 - (1 - 2*x**2)**2) + 2/sqrt(1 - x**2), that I need to evaluate at x=0. However, whenever you graph this function, there are many y-values at x=0 for some interval of y. This leads me to think that the subs command can only return one y-value. Any help or elaboration on this? Thank you!
Here's my code if it might help:
from sympy import *

x = symbols('x')
f = 2*asin(x)  # f(x) function
g = acos(1-2*x**2)  # g(x) function
eq = diff(f-g)  # evaluating the derivative of f(x) - g(x)
eq.subs(x, 0)  # substituting 0 for x in the derivative of f(x) - g(x)
After I run the code, it returns NaN, which I assume is because substituting in 0 for x returns not a single number, but a range of numbers.
(The original question included a graph of the function to be evaluated at x=0; image omitted.)
You should always give SymPy as many assumptions as possible. For example, it can't pull an x**2 out of a sqrt because it thinks x is complex.
A factorization and then a simplification solve the problem. SymPy can't do L'Hopital on eq = A + B since it does not know that both A and B converge, so you have to guide it a little by bringing the fractions together and then simplifying:
from sympy import *
x = symbols('x', real=True)
f = 2*asin(x) # f(x) function
g = acos(1-2*x**2) # g(x) function
eq = diff(f-g) # evaluating the derivative of f(x) - g(x)
eq = simplify(factor(eq))
print(eq)
print(limit(eq, x, 0, "+"))
print(limit(eq, x, 0, "-"))
Outputs:
(-2*x + 2*Abs(x))/(sqrt(1 - x**2)*Abs(x))
0
4
simplify, factor and expand do wonders.

solving differential equation with step function

I am trying to solve this differential equation as part of my assignment. I am not able to understand how I can put the condition for u in the code. In the code shown below, I arbitrarily provided u = 5.
2 * dx(t)/dt = -x(t) + u(t)
5 * dy(t)/dt = -y(t) + x(t)
u = 2*S(t - 5)
x(0) = 0
y(0) = 0
where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by two, it changes from zero to two at that same time, t=5.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def model(x,t,u):
    dxdt = (-x+u)/2
    return dxdt

def model2(y,x,t):
    dydt = -(y+x)/5
    return dydt
x0 = 0
y0 = 0
u = 5
t = np.linspace(0,40)
x = odeint(model,x0,t,args=(u,))
y = odeint(model2,y0,t,args=(u,))
plt.plot(t,x,'r-')
plt.plot(t,y,'b*')
plt.show()
I do not know the SciPy library very well, but going by the example in the documentation I would try something like this:
def model(x, t, K, PT):
    """
    The model consists of the state x in R^2, the time t in R, and the two
    parameters K and PT describing the input u as a step function, where K
    is the height of the step and PT is the delay of the step.
    """
    x1, x2 = x  # split the state into two variables
    u = K if t >= PT else 0  # this is the system input
    # here comes the differential equation in vectorized form
    dx = [(-x1 + u)/2,
          (-x2 + x1)/5]
    return dx
x0 = [0, 0]
K = 2
PT = 5
t = np.linspace(0,40)
x = odeint(model, x0, t, args=(K, PT))
plt.plot(t, x[:, 0], 'r-')
plt.plot(t, x[:, 1], 'b*')
plt.show()
You have a couple of issues here, and the step function is only a small part of them. You can define a step function with a simple lambda and then simply capture it from the outer scope without even passing it to your function. Because sometimes that won't be the case, we'll be explicit and pass it.
Your next problem is the order of arguments in the function to integrate. As per the docs, its signature is (y, t, ...): first the state, then the time, then the extra args arguments. So for the first part we get:
u = lambda t: 2 if t > 5 else 0

def model(x,t,u):
    dxdt = (-x+u(t))/2
    return dxdt

x0 = 0
y0 = 0
t = np.linspace(0,40)
x = odeint(model,x0,t,args=(u,))
Moving to the next part, the trouble is that you can't feed the x result in as an argument, because it's a vector of values of x(t) at particular times, so y + x doesn't make sense in the function as you wrote it. You can follow your intuition from math class if you pass an x function instead of the x values. Doing so requires interpolating the x values at the specific times odeint asks about (which scipy can handle, no problem):
from scipy.interpolate import interp1d

# flatten because the shapes are off; extrapolate because odeint will go out of bounds
xfunc = interp1d(t.flatten(), x.flatten(), fill_value="extrapolate")

def model2(y,t,x):
    dydt = -(y+x(t))/5
    return dydt

y = odeint(model2,y0,t,args=(xfunc,))
Then you get the resulting plot of x(t) and y(t) (image omitted).
Sven's answer above is more idiomatic for vector programming with scipy/numpy, but I hope my answer provides a clearer path from what you know already to a working solution.

Partial symbolic derivative in Python

I need to partially differentiate my equation and form a matrix out of the derivatives. The equation and the conditions it must satisfy were given as images in the original post; the equation itself appears as A in the code below.
For doing this I've used the sympy module and its diff() function. My code so far is:
from sympy import *
import numpy as np

init_printing()  # delete if you don't have LaTeX installed
logt_r, logt_a, T, T_a, a_0, a_1, a_2, logS, Taa_0, Taa_1, Taa_2 = symbols(
    'logt_r, logt_a, T, T_a, a_0, a_1, a_2, logS, Taa_0, Taa_1, Taa_2')
A = (logt_r - logt_a - (T - T_a) * (a_0 + a_1 * logS + a_2 * logS**2))**2
parametri = [logt_a, a_0, Taa_0, a_1, Taa_1, a_2, Taa_2]
M = expand(A)
M = M.subs(T_a*a_0, Taa_0)
M = M.subs(T_a*a_1, Taa_1)
M = M.subs(T_a*a_2, Taa_2)
K = zeros(len(parametri), len(parametri))
O = []

def odv(par):
    for j in range(len(par)):
        for i in range(len(par)):
            P = diff(M, par[i])/2
            B = P.coeff(par[j])
            K[i,j] = B
    return K

odv(parametri)
My result (a matrix, shown as an image in the original post):
My problem
The problem I'm having is with the partial derivatives with respect to the products (T_a*a_0, T_a*a_1 and T_a*a_2): the diff() function cannot differentiate with respect to a product out of the box, so you get an error:
ValueError:
Can't calculate 1-th derivative wrt T_a*a_0.
To solve this I substituted these products with single coefficient symbols, like:
M = M.subs(T_a*a_0, Taa_0)
M = M.subs(T_a*a_1, Taa_1)
M = M.subs(T_a*a_2, Taa_2)
But as you can see in the final result, this works only in some cases. I would like to know if there is a better way of doing this, one where I wouldn't need to substitute the products and which would work in all cases.
ADDITIONAL INFORMATION
Let me rephrase my question: is it possible to symbolically differentiate an equation with respect to a function (a compound expression) in Python, using the sympy module?
So I've managed to solve my problem on my own. The main question was how to symbolically differentiate a function or equation with respect to another function. Going slowly over the sympy documentation again, I saw a little detail that I had missed before.
In order to differentiate with respect to another function, you need to change a setting on the class of the expression that will be used as the differentiation variable. For example:
x, y, z = symbols('x, y, z')
A = x*y*z
B = x*y
# This is the detail:
type(B)._diff_wrt = True
diff(A, B)
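Note that _diff_wrt is a private attribute (the leading underscore marks it as internal), so this is a workaround rather than documented public API, and its behaviour may vary across SymPy versions.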
Or in my case, the code looks like:
koef = [logt_a, a_0, T_a*a_0, a_1, T_a*a_1, a_2, T_a*a_2]
M = expand(A)
K = zeros(len(koef), len(koef))

def odvod_mat(par):
    for j in range(len(par)):
        for i in range(len(par)):
            type(par[i])._diff_wrt = True
            P = diff(M, par[i])/2
            B = P.coeff(par[j])
            K[i,j] = B
            # Removal of T_a
            K[i,j] = K[i,j].subs(T_a, 0)
    return K

odvod_mat(koef)
Thanks again to all who took the time to read this. I hope it helps anyone who runs into the same problem I did.

two dimensional fit with python

I need to fit a function
z(u,v) = C u v^p
That is, I have a two-dimensional data set, and I have to find two parameters, C and p. Is there something in numpy or scipy that can do this in a straightforward manner? I took a look at scipy.optimize.leastsq, but it's not clear to me how I would use it here.
import scipy.optimize

def f(x, u, v, z_data):
    C = x[0]
    p = x[1]
    modelled_z = C*u*v**p
    diffs = modelled_z - z_data
    # leastsq expects a 1D array out; it doesn't matter that the data is
    # conceptually 2D, provided you flatten it consistently
    return diffs.flatten()

result = scipy.optimize.leastsq(f, [1.0, 1.0],  # initial guess at starting point
                                args=(u, v, z_data))  # alternatively, use closure variables in f
# result[0] is the best-fit point (C, p)
For your specific function you might be able to do it better - for example, for any given value of p there is one best value of C that can be determined by straightforward linear algebra.
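To make that concrete, here is a small sketch of my own (not part of the original answer): with p held fixed, the residual is linear in C, so the optimal C has a closed form.
import numpy as np

def best_C(p, u, v, z):
    # with b = u*v**p, minimizing sum((C*b - z)**2) over C alone
    # gives the closed form C = <z, b> / <b, b>
    b = u * v**p
    return np.sum(z * b) / np.sum(b * b)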
You can transform the problem into a simple linear least squares problem, and then you don't need leastsq() at all.
z[i] == C * u[i] * v[i]**p
becomes
z[i]/u[i] == C * v[i]**p
And then
log(z[i]/u[i]) == log(C) + p * log(v[i])
Change variables and you can solve as a simple linear problem:
Z[i] == L + p * V[i]
Using numpy and assuming you have the data in arrays z, u and v, this is rendered as:
import numpy as np

Z = np.log(z/u)
V = np.log(v)
p, L = np.polyfit(V, Z, 1)
C = np.exp(L)
You probably ought to put a try: and except: around it in case some of the u values are zero or there are negative values.
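For instance, instead of try/except you could mask out the points where the logs are undefined (a sketch of my own, assuming u, v and z are numpy arrays):
valid = (u > 0) & (v > 0) & (z > 0)  # log(z/u) and log(v) need positive inputs
Z = np.log(z[valid] / u[valid])
V = np.log(v[valid])
p, L = np.polyfit(V, Z, 1)
C = np.exp(L)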
