Scipy fsolve diverges towards infinity instead of the solution - python

I want to solve this equation:
However, when I try to solve it with scipy's fsolve, it diverges towards infinity instead of giving me the solution.
The reason why it goes to infinity is that the function tends to 0 when x tends to infinity:
Here you have the sample code:
import numpy as np
from numpy import e
from scipy import optimize

def f(x, r): return -e ** (-r * x)
def h(r): return 2 * f(4, r) - f(2, r) - f(10, r)

x0 = np.array([1])
print(optimize.fsolve(h, x0))
With some other parameters it finds the solution. However, I want the code to work with different parameters, not just the ones in the example. I also want to avoid the zero solution.
Many thanks

If you let t = exp(-2r), the equation becomes a polynomial in t, so you can solve it with numpy.roots:
import numpy as np
roots = np.roots([-1, 0, 0, 2, -1, 0])
solutions = [-np.log(t) / 2 for t in roots]
which gives you 3 real and 2 complex solutions.
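The question also asks how to avoid the zero solution; one way (a small sketch, not from the answer above) is to keep only the real, strictly positive roots and drop t = 1, which maps back to the trivial r = 0:
import numpy as np

# roots of -t**5 + 2*t**2 - t = 0, where t = exp(-2*r)
roots = np.roots([-1, 0, 0, 2, -1, 0])

# keep real, strictly positive roots (log needs t > 0) and drop t = 1,
# which corresponds to the trivial solution r = 0
real_ts = roots[np.abs(roots.imag) < 1e-12].real
valid_ts = real_ts[(real_ts > 0) & ~np.isclose(real_ts, 1.0)]

r_solutions = -np.log(valid_ts) / 2
print(r_solutions)
# sanity check: h(r) should be ~0 at each candidate
print([-2*np.exp(-4*r) + np.exp(-2*r) + np.exp(-10*r) for r in r_solutions])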

Related

Solving equation containing integrals with python

I'm currently trying to solve the following equation for x:
3.17e-2 - integral from x to 215 of [(10.^(8.64/x) / (480.1 - 10.^(4.32/x))^2)]dx = 0.
(sorry for writing the equation in such a crude way, I wasn't sure on how to insert latex on here)
so far I've come up with this:
import scipy as s
from scipy.integrate import odeint, quad
import numpy as np

def f(x):
    fpe = 40
    k = 1.26e4*fpe**2/4.2e4
    return 10.**(8.64/x) / (k - 10.**(4.32/x))**2

def intf(x):
    for i in x:
        if 3.17e-2 - quad(lambda i: f(i), i, 215) == 0.:
            print(i)

xi = np.linspace(0.01, 5, 1000)
intf(xi)
However, I keep getting the following error:
OverflowError: (34, 'Result too large')
As you can imagine, this is not the result I was expecting. Do you reckon that this is only due to the result being too large or could there be something wrong with the code?
One thing you have to change: quad returns a tuple (y, abserr), so the result of the integral is quad(...)[0].
Also, if you compare f(x) == 0 you will only detect exact solutions, which is practically impossible for this function in floating-point computation. You could use abs(f(x)) < ytol, or simply use a root-finding method; I would suggest fsolve.
Another thing is that you have the derivative of the function (the integrand itself), so you can pass that to fsolve as well. Putting it all together:
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def fprime(x):
    fpe = 40
    k = 1.26e4*fpe**2/4.2e4
    return 10.**(8.64/x) / (k - 10.**(4.32/x))**2

def f(x):
    try:
        return np.array([f(i) for i in x])
    except TypeError:
        return 3.17e-2 - quad(lambda i: fprime(i), x, 215)[0]

x0 = fsolve(f, 1, fprime=fprime)
this gives x0=2.03740802, and f(x0) = 2.35922393e-16

How to put an integral in a function in python/matplotlib

So pretty much, I am aiming to achieve a function f(x)
My problem is that my function has an integral in it, and I only know how to construct definite integrals. So my question is: how does one create an indefinite integral in a function (or is there some other method I am currently unaware of)?
My function is defined as :
(G is gravitational constant, although you can leave G out of your answer for simplicity, I'll add it in my code)
Here is the starting point, but I don't know how to do the integral portion
import numpy as np

def f(x):
    rho = 5*(1/(1+((x**2)/(3**2))))
    function_result = rho * 4 * np.pi * x**2
    return function_result
Please let me know if I need to elaborate on something.
EDIT-----------------------------------------------------
I made some major progress, but I still have one little error.
Pretty much, I did this:
from sympy import *

x = Symbol('x')
rho = p0()*(1/(1+((x**2)/(rc()**2))))* 4 * np.pi * x**2
fooply = integrate(rho, x)

def f(rx):
    function_result = fooply.subs({x: rx})
    return function_result
Which works fine when I plug in one number for f; however, when I plug in an array (as I need to later), I get the error:
raise SympifyError(a)
sympy.core.sympify.SympifyError: SympifyError: [3, 3, 3, 3, 3]
(Here, I did print(f([3,3,3,3,3]))). Usually, the function returns an array of values. So if I did f([3,2]) it should return [f(3),f(2)]. Yet, for some reason, it doesn't for my function....
Thanks in advance
how about:
from sympy import *
import numpy as np

x, p0, rc = symbols('x p0 rc', real=True, positive=True)
rho = p0*(1/(1+((x**2)/(rc))))* 4 * pi * x**2
fooply = integrate(rho, x)/x
rho, fooply
(4*pi*p0*x**2/(1 + x**2/rc),
 4*pi*p0*rc*(-sqrt(rc)*atan(x/sqrt(rc)) + x)/x)
fooply = fooply.subs({p0: 2.0, rc: 3.0})
np_fooply = lambdify(x, fooply, 'numpy')
print(np_fooply(np.array([3, 3, 3, 3, 3])))
[ 29.81247362  29.81247362  29.81247362  29.81247362  29.81247362]
To plug in an array to a SymPy expression, you need to use lambdify to convert it to a NumPy function (f = lambdify(x, fooply)). Just using def and subs as you have done will not work.
Also, in general, when using symbolic computations, it's better to use sympy.pi instead of np.pi, as the former is symbolic and can simplify. It will automatically be converted to the numeric pi by lambdify.
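As a concrete sketch of that advice (reusing p0 = 5 and rc = 3 from the first plain-numpy snippet purely for illustration), the subs-based f could be replaced by a lambdified function roughly like this:
import numpy as np
from sympy import Symbol, pi, integrate, lambdify

x = Symbol('x', positive=True)
# same density profile as the plain-numpy snippet above: p0 = 5, rc = 3
rho = 5 * (1 / (1 + (x**2) / (3**2))) * 4 * pi * x**2
fooply = integrate(rho, x)

f = lambdify(x, fooply, 'numpy')   # instead of fooply.subs({x: rx}) inside a def
print(f(np.array([3, 2])))         # now returns an array, i.e. [f(3), f(2)]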

How to Integrate Arc Lengths using python, numpy, and scipy?

On another thread, I saw someone manage to integrate the length of an arc using Mathematica. They wrote:
In[1]:= ArcTan[3.05*Tan[5Pi/18]/2.23]
Out[1]= 1.02051
In[2]:= x=3.05 Cos[t];
In[3]:= y=2.23 Sin[t];
In[4]:= NIntegrate[Sqrt[D[x,t]^2+D[y,t]^2],{t,0,1.02051}]
Out[4]= 2.53143
How exactly could this be transferred to python using the imports of numpy and scipy? In particular, I am stuck on line 4 in his code with the "NIntegrate" function. Thanks for the help!
Also, if I already have the arc length and the vertical axis length, how would I be able to reverse the program to spit out the original parameters from the known values? Thanks!
To my knowledge scipy cannot perform symbolic computations (such as symbolic differentiation). You may want to have a look at http://www.sympy.org for a symbolic computation package. Therefore, in the example below, I compute derivatives analytically (the Dx(t) and Dy(t) functions).
>>> from scipy.integrate import quad
>>> import numpy as np
>>> Dx = lambda t: -3.05 * np.sin(t)
>>> Dy = lambda t: 2.23 * np.cos(t)
>>> quad(lambda t: np.sqrt(Dx(t)**2 + Dy(t)**2), 0, 1.02051)
(2.531432761012828, 2.810454936566873e-14)
EDIT: Second part of the question - inverting the problem
From the fact that you know the value of the integral (arc) you can now solve for one of the parameters that determine the arc (semi-axes, angle, etc.). Let's assume you want to solve for the angle. Then you can use one of the non-linear solvers in scipy to solve the equation quad(theta) - arcval == 0. You can do it like this:
>>> from scipy.integrate import quad
>>> from scipy.optimize import broyden1
>>> import numpy as np
>>> a = 3.05
>>> b = 2.23
>>> Dx = lambda t: -a * np.sin(t)
>>> Dy = lambda t: b * np.cos(t)
>>> arc = lambda theta: quad(lambda t: np.sqrt(Dx(t)**2 + Dy(t)**2), 0, np.arctan((a / b) * np.tan(np.deg2rad(theta))))[0]
>>> invert = lambda arcval: float(broyden1(lambda x: arc(x) - arcval, np.rad2deg(arcval / np.sqrt((a**2 + b**2) / 2.0))))
Then:
>>> arc(50)
2.531419526553662
>>> invert(arc(50))
50.000031008458365
If you prefer a pure numerical approach, you could use the following barebones solution. This worked well for me given that I had two input numpy.ndarrays, x and y with no functional form available.
import numpy as np

def arclength(x, y, a, b):
    """
    Computes the arclength of the given curve
    defined by (x0, y0), (x1, y1) ... (xn, yn)
    over the provided bounds, `a` and `b`.

    Parameters
    ----------
    x: numpy.ndarray
        The array of x values
    y: numpy.ndarray
        The array of y values corresponding to each value of x
    a: int
        The lower limit to integrate from
    b: int
        The upper limit to integrate to

    Returns
    -------
    numpy.float64
        The arclength of the curve
    """
    bounds = (x >= a) & (x <= b)
    return np.trapz(
        np.sqrt(
            1 + np.gradient(y[bounds], x[bounds]) ** 2
        ),
        x[bounds]
    )
Note: I spaced the return statement out that way just to make it more readable and to make the operations taking place clearer.
As an aside, recall that the arc-length of a curve $y(x)$ is given by:
$L = \int_a^b \sqrt{1 + \left(\frac{dy}{dx}\right)^2}\,dx$
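As a rough sanity check of the trapezoidal arclength function above (using y = x**2, whose derivative is known, purely as an illustration), one could compare it against quad:
import numpy as np
from scipy.integrate import quad

x = np.linspace(0, 1, 1001)
y = x ** 2

print(arclength(x, y, 0, 1))                               # ~1.4789
print(quad(lambda t: np.sqrt(1 + (2 * t) ** 2), 0, 1)[0])  # ~1.4789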

Euler's method in python

I'm trying to implement euler's method to approximate the value of e in python. This is what I have so far:
def Euler(f, t0, y0, h, N):
    t = t0 + arange(N+1)*h
    y = zeros(N+1)
    y[0] = y0
    for n in range(N):
        y[n+1] = y[n] + h*f(t[n], y[n])
    f = (1+(1/N))^N
    return y
However, when I try to call the function, I get the error "ValueError: shape <= 0". I suspect this has something to do with how I defined f? I tried inputting f directly when euler is called, but gave me errors related to variables not being defined. I also tried defining f as its own function, which gave me a division by 0 error.
def f(N):
    for n in range(N):
        return (1+(1/n))^n
(not sure if N was the appropriate variable to use here...)
The formula you are trying to use is not Euler's method, but rather the limit definition of e as n approaches infinity (wiki):
$e = \lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^n$
Euler's method is used to solve first order differential equations.
Here are two guides that show how to implement Euler's method to solve a simple test function: beginner's guide and numerical ODE guide.
To answer the title of this post, rather than the question you are asking, I've used Euler's method to solve usual exponential decay:
$\frac{dN}{dt} = -\lambda N$
Which has the solution,
$N(t) = N_0 e^{-\lambda t}$
Code:
from __future__ import division

import numpy as np
import matplotlib.pyplot as plt

# Concentration over time
N = lambda t: N0 * np.exp(-k * t)

# dN/dt
def dx_dt(x):
    return -k * x

k = .5
h = 0.001
N0 = 100.

t = np.arange(0, 10, h)
y = np.zeros(len(t))
y[0] = N0

for i in range(1, len(t)):
    # Euler's method
    y[i] = y[i-1] + dx_dt(y[i-1]) * h

max_error = abs(y - N(t)).max()
print("Max difference between the exact solution and Euler's approximation with step size h=0.001:")
print('{0:.15}'.format(max_error))
Output:
Max difference between the exact solution and Euler's approximation with step size h=0.001:
0.00919890254720457
Note: I'm not sure how to get LaTeX displaying properly.
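If you do want to approximate e itself with Euler's method, one option is to apply it to dy/dt = y with y(0) = 1 and read off y(1), since the exact solution is e^t; with step size h = 1/N this reproduces exactly the (1 + 1/N)**N formula from the question. A minimal sketch:
import numpy as np

# Euler's method on dy/dt = y, y(0) = 1; y(1) approximates e
N = 100000
h = 1.0 / N
y = 1.0
for _ in range(N):
    y = y + h * y        # y[n+1] = y[n] + h*f(t[n], y[n]) with f(t, y) = y

print(y)       # ~2.71827
print(np.e)    # 2.718281828...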
Are you sure you are not trying to implement Newton's method? Newton's method is used to approximate roots.
In case you decide to go with Newton's method, here is a slightly changed version of your code that approximates the square root of 2. You can replace f(x) and fp(x) with the function and its derivative for whatever you want to approximate.
import numpy as np

def f(x):
    return x**2 - 2

def fp(x):
    return 2*x

def Newton(f, y0, N):
    y = np.zeros(N+1)
    y[0] = y0
    for n in range(N):
        y[n+1] = y[n] - f(y[n])/fp(y[n])
    return y

print(Newton(f, 1, 10))
gives
[ 1. 1.5 1.41666667 1.41421569 1.41421356 1.41421356
1.41421356 1.41421356 1.41421356 1.41421356 1.41421356]
which are the initial value and the first ten iterations to the square-root of two.
Besides this, a big problem was the usage of ^ instead of ** for powers, which is legal but a totally different (bitwise XOR) operation in Python.
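For example, a quick demonstration of the difference:
print(2 ** 3)   # 8, exponentiation
print(2 ^ 3)    # 1, bitwise XOR: 0b10 ^ 0b11 == 0b01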

Solve an implicit ODE (differential algebraic equation DAE)

I'm trying to solve a second order ODE using odeint from scipy. The issue I'm having is the function is implicitly coupled to the second order term, as seen in the simplified snippet (please ignore the pretend physics of the example):
import numpy as np
from scipy.integrate import odeint

def integral(y, t, F_l, mass):
    dydt = np.zeros_like(y)
    x, v = y
    F_r = (((1-a)/3)**2 + (2*(1+a)/3)**2) * v  # 'a' implicit
    a = (F_l - F_r)/mass
    dydt = [v, a]
    return dydt

y0 = [0, 5]
time = np.linspace(0., 10., 21)
F_lon = 100.
mass = 1000.

dydt = odeint(integral, y0, time, args=(F_lon, mass))
In this case I realise it is possible to algebraically solve for the implicit variable; however, in my actual scenario there is a lot of logic between F_r and the evaluation of a, and algebraic manipulation fails.
I believe the DAE could be solved using MATLAB's ode15i function, but I'm trying to avoid that scenario if at all possible.
My question is - is there a way to solve implicit ODE functions (DAE) in python( scipy preferably)? And is there a better way to pose the problem above to do so?
As a last resort, it may be acceptable to pass a from the previous time-step. How could I pass dydt[1] back into the function after each time-step?
Quite old, but worth updating so it may be useful for anyone who stumbles upon this question. There are quite a few packages currently available in Python that can solve implicit ODEs.
GEKKO (https://github.com/BYU-PRISM/GEKKO) is one such package; it specializes in dynamic optimization for mixed-integer, nonlinear optimization problems, but can also be used as a general-purpose DAE solver.
The above "pretend physics" problem can be solved in GEKKO as follows.
from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt

m = GEKKO()
m.time = np.linspace(0, 100, 101)

F_l = m.Param(value=1000)
mass = m.Param(value=1000)

m.options.IMODE = 4
m.options.NODES = 3

F_r = m.Var(value=0)
x = m.Var(value=0)
v = m.Var(value=0, lb=0)
a = m.Var(value=5, lb=0)

m.Equation(x.dt() == v)
m.Equation(v.dt() == a)
m.Equation(F_r == (((1-a)/3)**2 + (2*(1+a)/3)**2 * v))
m.Equation(a == (1000 - F_l)/mass)

m.solve(disp=False)

plt.plot(m.time, x.value)
plt.show()
If algebraic manipulation fails, you can go for a numerical solution of your constraint, running for example fsolve at each timestep:
import sys

from numpy import linspace
from scipy.integrate import odeint
from scipy.optimize import fsolve

y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 10.
mass = 1000.

def F_r(a, v):
    return (((1 - a) / 3) ** 2 + (2 * (1 + a) / 3) ** 2) * v

def constraint(a, v):
    return (F_lon - F_r(a, v)) / mass - a

def integral(y, _):
    v = y[1]
    a, _, ier, mesg = fsolve(constraint, 0, args=[v, ], full_output=True)
    if ier != 1:
        print("I couldn't solve the algebraic constraint, error:\n\n", mesg)
        sys.stdout.flush()
    return [v, a]

dydt = odeint(integral, y0, time)
Clearly this will slow down your time integration. Always check that fsolve finds a good solution, and flush the output so that you can realize it as it happens and stop the simulation.
About how to "cache" the value of a variable at a previous timestep, you can exploit the fact that default arguments are evaluated only once, at function definition:
from numpy import linspace
from scipy.integrate import odeint

# you can choose a better guess using fsolve instead of 0
def integral(y, _, F_l, M, cache=[0]):
    v, preva = y[1], cache[0]
    # use value for 'a' from the previous timestep
    F_r = (((1 - preva) / 3) ** 2 + (2 * (1 + preva) / 3) ** 2) * v
    # calculate the new value
    a = (F_l - F_r) / M
    cache[0] = a
    return [v, a]

y0 = [0, 5]
time = linspace(0., 10., 1000)
F_lon = 100.
mass = 1000.

dydt = odeint(integral, y0, time, args=(F_lon, mass))
Notice that in order for the trick to work the cache parameter must be mutable, and that's why I use a list. See this link if you are not familiar with how default arguments work.
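A minimal illustration of that behaviour (the default list is created once, at function definition, and shared between calls):
def remember(value=None, cache=[0]):
    if value is not None:
        cache[0] = value   # persists across calls
    return cache[0]

print(remember())     # 0  (initial default)
print(remember(42))   # 42
print(remember())     # 42 (the previous value survived)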
Notice that the two codes DO NOT produce the same result, and you should be very careful using the value at the previous timestep, both for numerical stability and precision. The second is clearly much faster though.
