Avoid evaluating a function with the same input multiple times - Python

I am trying to utilise scipy.optimize.fsolve for solving a function. I noticed that the function is evaluated with the same value multiple times at the beginning and the end of the iteration steps. For example, when the following code is evaluated:
from scipy.optimize import fsolve

def yy(x):
    print(x)
    return x**2 + 9*x + 20

y = fsolve(yy, 22.)
print(y)
The following output is obtained:
[ 22.]
[ 22.]
[ 22.]
[ 22.00000033]
[ 8.75471707]
[ 4.34171812]
[ 0.81508685]
[-1.16277103]
[-2.42105811]
[-3.17288066]
[-3.61657372]
[-3.85653348]
[-3.96397335]
[-3.99561793]
[-3.99984826]
[-3.99999934]
[-4.]
[-4.]
[-4.]
Therefore the function is evaluated at 22. three times, which is unnecessary.
This is especially annoying when the function requires substantial evaluation time. Could anyone please explain this and suggest how to avoid this issue?

The first evaluation is done only to check the shape and data type of the output of the function. Specifically, fsolve calls _root_hybr which contains the line
shape, dtype = _check_func('fsolve', 'func', func, x0, args, n, (n,))
Naturally, _check_func calls the function:
res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
Since only the shape and data type are retained from this evaluation, the solver will call the function with the value x0 again when the actual root-finding process begins.
The above accounts for one extraneous call (out of two). I did not track down the other one, but it's conceivable that the FORTRAN code does some kind of preliminary check of its own. This sort of thing happens when algorithms written long ago get wrapped over and over again.
If you really want to save these two evaluations of the expensive function yy, one way is to compute the value yy(x0) separately and store it. For example:
def yy(x):
    # reuse the precomputed value for the initial point
    if y0 is not None and x == x0:
        return y0
    print(x)
    return x**2 + 9*x + 20

x0 = 22.
y0 = None
y0 = yy(x0)
y = fsolve(yy, x0)
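If several of the expensive calls can repeat (not just the ones at x0), the same idea can be generalised into a small wrapper that remembers the most recent input/output pair. This is a sketch of my own; the `cached` decorator is not part of scipy:

```python
import numpy as np
from scipy.optimize import fsolve

def cached(func):
    """Remember the most recent (input, output) pair of func."""
    last = {"x": None, "y": None}
    def wrapper(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        if last["x"] is not None and np.array_equal(x, last["x"]):
            return last["y"]          # skip the repeated evaluation
        y = func(x)
        last["x"], last["y"] = x.copy(), y
        return y
    return wrapper

calls = []

@cached
def yy(x):
    calls.append(float(x[0]))         # record every real evaluation
    return x**2 + 9*x + 20

root = fsolve(yy, 22.)
```

With this wrapper the consecutive calls at 22. collapse into a single real evaluation.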

I realised that an important reason for this issue is that fsolve is not really meant for such a problem. Solvers should be chosen wisely :)
Optimization:
    multivariate: fmin, fmin_powell, fmin_cg, fmin_bfgs, fmin_ncg
    nonlinear least squares: leastsq
    constrained: fmin_l_bfgs_b, fmin_tnc, fmin_cobyla
    global: basinhopping, brute, differential_evolution
    local (scalar): fminbound, brent, golden, bracket
Root finding:
    n-dimensional: fsolve
    one-dimensional: brenth, ridder, bisect, newton
    fixed point: fixed_point
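For instance, for a scalar equation like the one in the question, a bracketing one-dimensional solver such as brentq typically needs far fewer evaluations and makes no duplicate start-up calls. A minimal sketch; the bracket [-4.5, 0] is chosen by hand so that the function changes sign inside it:

```python
import numpy as np
from scipy.optimize import brentq

calls = []

def yy(x):
    calls.append(x)               # count the actual evaluations
    return x**2 + 9*x + 20        # roots at x = -4 and x = -5

# brentq needs a bracket [a, b] with a sign change: yy(-4.5) < 0 < yy(0)
root = brentq(yy, -4.5, 0)
print(root, len(calls))
```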

Related

How to include adjustable parameter in fsolve?

I'm getting familiar with fsolve in Python and I am having trouble including adjustable parameters in my system of nonlinear equations. This link seems to answer my question but I still get errors. Below is my code:
import scipy.optimize as so

def test(x, y, z):
    eq1 = x**2 + y**2 - z
    eq2 = 2*x + 1
    return [eq1, eq2]

z = 1                                       # adjustable parameter
sol = so.fsolve(test, [-1, 2], args=(z))    # solution array
print(sol)                                  # display solution
The output gives
TypeError: test() missing 1 required positional argument: 'z'
even though z is clearly defined as an argument. How do I include this adjustable parameter?
Before posting here I should have spent a little more time playing with it. Here is what I found:
import scipy.optimize as so
import numpy as np

def test(variables, z):        # function of the variables and the adjustable arg
    x, y = variables           # unpack the variables
    eq1 = x**2 + y**2 - 1 - z  # equation to solve #1
    eq2 = 2*x + 1              # equation to solve #2
    return [eq1, eq2]          # return the equation array

z = 1                                       # adjustable parameter
initial = [1, 2]                            # initial-condition list
sol = so.fsolve(test, initial, args=(z,))   # call fsolve; note (z,) is a one-element tuple, (z) is just z
print(np.array(sol))                        # display sol
With an output
[-0.5 1.32287566]
I'm not the best at analyzing code, but I think my issue was that I had mixed up my variables and arguments in test(x,y,z) such that it didn't know what I was trying to apply the initial guess to.
Anyway, I hope this helped someone.
edit: While I'm here, the test function is a circle of adjustable radius and a line that intersects it at two points.
If you wanted to find the positive and negative solutions, you'd need to pass your initial guess in as an array (someone asked a similar question here). Here is the updated version
z = 1
initial = [[-2, -1], [2, 1]]
sol = []
for i in range(len(initial)):
    sol.append(so.fsolve(test, initial[i], args=(z,)))
print(np.array(sol))
The output is
[[-0.5 -1.32287566]
[-0.5 1.32287566]]

Scipy Minimize and np.linalg.norm

This is my first time posting a question here, so please bear with me. Now, I was "playing" with the minimize function and I came across something odd.
The setup is simple: all I want to do is minimize the Euclidean norm of the form:
sqrt[(x1 -a1)^2+(x2 -a2)^2+(x3 -a3)^2+(x4 -a4)^2]
Where x's are variables and a's are some fixed numbers, constants. Of course the solution is simple, xi=ai for i=1,2,3,4; and the value of the function is zero.
Now, when I do this code in python, everything works fine:
import numpy as np
from scipy.optimize import minimize

def w(x):
    a = np.array([[0, 1, 1, 0]]).T
    return np.sqrt((x[0]-a[0])**2 + (x[1]-a[1])**2 + (x[2]-a[2])**2 + (x[3]-a[3])**2)

np.random.seed(1)
x0 = np.random.random((4, 1))
min_fun1 = minimize(w, x0, method='nelder-mead', options={'xatol': 1e-8, 'disp': True})
print(min_fun1.fun)
print(min_fun1.x)
As I said, this works just fine: the value of the function is zero and xi = ai. But when I try to set up the problem using the np.linalg.norm function, it gets weird. For example, when I run this code:
def y(x):
    a = np.array([[0, 1, 1, 0]]).T
    return np.linalg.norm(x - a)

min_fun0 = minimize(y, x0, method='nelder-mead', options={'xatol': 1e-8, 'disp': True})
min_fun0.fun
min_fun0.x
It seems like the optimization gets stuck: the value of the function is 2 and xi = 0.5, which is obviously not correct.
Even more, if I try to do it with the dot product, the result is different but still incorrect:
def v(x):
    a = np.array([[0, 1, 1, 0]]).T
    return np.sqrt(np.dot((x - a).T, (x - a)))

min_fun2 = minimize(v, x0, method='nelder-mead', options={'xatol': 1e-8, 'disp': True})
min_fun2.fun
min_fun2.x
What I was able to understand is that it has something to do with the "type" of the object. For example, np.linalg.norm returns a float64, while the example that works returns an ndarray of size 1. So I changed the "type" of the np.linalg.norm result to match an ndarray of size 1, but the problem persists.
Looks like the problem is the shape of a, not the return type: minimize flattens the parameter vector to 1-d (shape (4,)), but a = np.array([[0,1,1,0]]).T is 2-d (shape (4, 1)), so x - a broadcasts to a (4, 4) matrix instead of an elementwise difference, and its norm can never reach zero.
If you convert your array to 1-d the minimisation works:
def y(x):
    a = np.array([0, 1, 1, 0])
    return np.linalg.norm(x - a)

min_fun0 = minimize(y, x0, method='nelder-mead', options={'xatol': 1e-8, 'disp': True})
min_fun0.fun
min_fun0.x
>>> array([-1.87741758e-09, 1.00000000e+00, 1.00000000e+00, -3.04858536e-09])
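The broadcasting effect is easy to see in isolation: with a 2-d a of shape (4, 1) and the flattened 1-d x that minimize passes in, x - a produces a full (4, 4) matrix rather than four elementwise differences. That is also why the solver settles at xi = 0.5 with value 2: each x[j] is compared against all four entries of a, and 0.5 minimises the combined Frobenius norm.

```python
import numpy as np

x = np.zeros(4)                     # minimize passes a flattened 1-d x
a_2d = np.array([[0, 1, 1, 0]]).T   # shape (4, 1)
a_1d = np.array([0, 1, 1, 0])       # shape (4,)

print((x - a_2d).shape)  # (4, 4): broadcasting, not an elementwise difference
print((x - a_1d).shape)  # (4,): what was intended
```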

Custom convergence criterion in scipy.optimize

I am optimising a function using scipy.optimize in the following manner:
yEst=minimize(myFunction, y0, method='L-BFGS-B', tol=1e-6).x
My problem is that I don't want to stop simply when the tolerance is less than a value (e.g. stop on the nth iteration when |y_n - y_(n-1)| < tol). Instead I have a slightly more complex function of y_n and y_(n-1), say tolFun, and I want to stop when tolFun(y_n, y_(n-1)) < tol.
To give more detail, my tolerance function is the following. It partitions y into chunks and checks whether any individual partition has a norm difference within tolerance; if any does, the minimisation should stop.
# Takes the current and previous iteration values and a pre-specified fixed scalar r.
def tolFun(yprev, ycurr, r):
    # The minimum norm so far (initialised to a big value)
    minnorm = 5000
    for i in np.arange(r):
        # Norm of the difference over the ith partition/block of entries
        norm = np.linalg.norm(yprev[np.arange(r)+i*r] - ycurr[np.arange(r)+i*r])
        # Update the minimum norm
        minnorm = min(norm, minnorm)
    return minnorm
My question is similar to this question here, but differs in that that user needed only the current iteration's value of y, whereas my custom tolerance function needs both the current and the previous value. Does anyone know how I could do this?
You cannot do directly what you want, since the callback function receives only the current parameter vector. To solve your problem you can modify the second solution from https://stackoverflow.com/a/30365576/8033585 (which I prefer to the first solution, which uses a global) along the following lines:
class Callback:
    def __init__(self, tolfun, tol=1e-8):
        self._tolf = tolfun
        self._tol = tol
        self._xk_prev = None

    def __call__(self, xk):
        if self._xk_prev is not None and self._tolf(xk, self._xk_prev) < self._tol:
            return True
        self._xk_prev = xk
        return False

cb = Callback(tolfun=tolFun, tol=tol)  # set tol here to control convergence
yEst = minimize(myFunction, y0, method='L-BFGS-B', tol=0, callback=cb)
or
yEst = optimize.minimize(
    myFunction, y0, method='L-BFGS-B',
    callback=cb, options={'gtol': 0, 'ftol': 0},
)
You can find available options for a solver/method using:
optimize.show_options('minimize', 'L-BFGS-B')
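To illustrate how the callback sees the current and previous iterates, here is a self-contained toy sketch. The quadratic objective and the plain-norm tolerance function are placeholders of my own, and since whether returning True actually terminates the solver depends on the method and SciPy version, this version only records the values:

```python
import numpy as np
from scipy.optimize import minimize

def tol_fun(ycurr, yprev):
    # placeholder tolerance function: plain norm of the step
    return np.linalg.norm(np.asarray(ycurr) - np.asarray(yprev))

class Callback:
    def __init__(self, tolfun):
        self._tolf = tolfun
        self._xk_prev = None
        self.history = []            # tolerance value after each iteration

    def __call__(self, xk):
        if self._xk_prev is not None:
            self.history.append(self._tolf(xk, self._xk_prev))
        self._xk_prev = np.copy(xk)  # copy: solvers may reuse the buffer

def f(x):
    # toy objective with minimum at (1, -2)
    return (x[0] - 1)**2 + 10*(x[1] + 2)**2

cb = Callback(tol_fun)
res = minimize(f, [5.0, 5.0], method='L-BFGS-B', callback=cb)
print(res.x, cb.history)
```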

Generating and Solving Simultaneous ODEs in Python

I'm relatively new to Python, and am encountering some issues in writing a piece of code that generates and then solves a system of differential equations.
My approach to doing this was to create a set of variables and coefficients, (x0, x1, ..., xn) and (c0, c1, ..., cn) respectively, in a list with the function var(). Then the equations are constructed in EOM1(). The initial conditions, along with the set of equations, are all put together in EOM2() and solved using odeint.
Currently the code below runs, albeit not efficiently; I believe the reason is that odeint runs through all the code on each iteration (that's something else I need to fix, but it isn't the main problem!).
import sympy as sy
from scipy.integrate import odeint

n = 2
cn0list = [0.01, 0.05]
xn0list = [0.01, 0.01]

def var():
    xnlist = []
    cnlist = []
    for i in range(n+1):
        xnlist.append('x{0}'.format(i))
        cnlist.append('c{0}'.format(i))
    return xnlist, cnlist

def EOM1():
    drdtlist = []
    for i in range(n):
        cn1 = sy.Symbol(var()[1][i])
        xn0 = sy.Symbol(var()[0][i])
        xn1 = sy.Symbol(var()[0][i+1])
        eom = cn1*xn0*(1.0 - xn1) - cn1*xn1 - xn1
        drdtlist.append(eom)
    xi = sy.Symbol(var()[0][0])
    xf = sy.Symbol(var()[0][n])
    drdtlist[n-1] = drdtlist[n-1].subs(xf, xi)
    return drdtlist

def EOM2(xn, t, cn):
    x0, x1 = xn
    c0, c1 = cn
    f = EOM1()
    output = []
    for part in f:
        output.append(part.evalf(subs={'x0': x0, 'x1': x1, 'c0': c0, 'c1': c1}))
    return output

abserr = 1.0e-6
relerr = 1.0e-4
stoptime = 10.0
numpoints = 20
t = [stoptime * float(i) / (numpoints - 1) for i in range(numpoints)]
wsol = odeint(EOM2, xn0list, t, args=(cn0list,), atol=abserr, rtol=relerr)
My problem is that I had difficulty getting Python to treat the variables generated by Sympy appropriately. I got around this with the line
output.append(part.evalf(subs={'x0':x0, 'x1':x1, 'c0':c0, 'c1':c1}))
in EOM2(). Unfortunately, I do not know how to generalize this line to a list of variables from x0 to xn, and from c0 to cn. The same applies to the earlier line in EOM2(),
x0, x1 = xn
c0, c1 = cn
In other words, if I set n to an arbitrary number, is there a way for Python to interpret each element as it does with the ones I manually entered above? I have tried the following
output.append(part.evalf(subs={'x{0}'.format(j):var(n)[0][j], 'c{0}'.format(j):var(n)[1][j]}))
yet this yields the error that led me to use evalf in the first place,
TypeError: can't convert expression to float
Is there any way to do what I want: generate a set of n equations which are then solved with odeint?
Instead of using evalf you want to look into using sympy.lambdify to generate a callback for use with SciPy. You will need to create a function with the expected signature of odeint, e.g.:
# sketch: rhs, t, y0, tout and param_values are defined elsewhere
y, params = sym.symbols('y:3'), sym.symbols('kf kb')
ydot = rhs(y, p=params)
f = sym.lambdify((y, t) + params, ydot)
yout = odeint(f, y0, tout, param_values)
We gave a tutorial on (among other things) how to use lambdify with odeint at the SciPy 2017 conference, the material is available here: http://www.sympy.org/scipy-2017-codegen-tutorial/
If you are open to using an external library to handle the function signatures of external solvers, you may be interested in a library I've authored: pyodesys
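To make the lambdify sketch concrete, here is a self-contained version with a made-up two-species system; the reversible reaction y0 <-> y1 with rates kf, kb is purely illustrative:

```python
import numpy as np
import sympy as sym
from scipy.integrate import odeint

t = sym.symbols('t')
y = sym.symbols('y0:2')             # tuple of symbols (y0, y1)
kf, kb = params = sym.symbols('kf kb')

# made-up reversible reaction y0 <-> y1: dy0/dt = kb*y1 - kf*y0, etc.
ydot = [kb*y[1] - kf*y[0], kf*y[0] - kb*y[1]]

# lambdify unpacks the tuple y, so f has the signature f(y, t, kf, kb)
f = sym.lambdify((y, t) + params, ydot)

y_init = [1.0, 0.0]
tout = np.linspace(0, 10, 50)
yout = odeint(f, y_init, tout, args=(3.0, 1.0))  # kf=3, kb=1
print(yout[-1])                                  # approaches [0.25, 0.75]
```

Because lambdify compiles the expressions to a plain numerical function once, odeint no longer re-evaluates any SymPy code on each step, which also addresses the efficiency complaint above.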
If I understand correctly, you want to make an arbitrary number of substitutions in a SymPy expression. This is how it can be done:
n = 10
syms = sy.symbols('x0:{}'.format(n))          # a tuple of n symbols
expr = sum(syms)                              # some expression with those symbols
floats = [1/(j+1) for j in range(n)]          # numbers to put in
expr.subs({symbol: value for symbol, value in zip(syms, floats)})
The result of subs is a float in this case (no evalf needed).
Note that the function symbols can directly create any number of symbols for you, via the colon notation. No need for a loop.

Scipy odeint giving index out of bounds errors

I am trying to solve a differential equation in Python using SciPy's odeint function. The equation is of the form dy/dt = w(t) where w(t) = w1*(1 + A*sin(w2*t)) for some parameters w1, w2, and A. The code I've written works for some parameters, but for others I get index-out-of-bounds errors.
Here's some example code that works
import numpy as np
import scipy.integrate as integrate

t = np.arange(1000)
w1 = 2*np.pi
w2 = 0.016*np.pi
A = 1.0
w = w1*(1 + A*np.sin(w2*t))

def f(y, t0):
    return w[t0]

y = integrate.odeint(f, 0, t)
Here's some example code that doesn't work
import numpy as np
import scipy.integrate as integrate

t = np.arange(1000)
w1 = 0.3*np.pi
w2 = 0.005*np.pi
A = 0.15
w = w1*(1 + A*np.sin(w2*t))

def f(y, t0):
    return w[t0]

y = integrate.odeint(f, 0, t)
The only thing that changes between these is that the three parameters w1, w2, and A are smaller in the second, but the second one always gives me the following error
line 13, in f
return w[t0]
IndexError: index 1001 is out of bounds for axis 0 with size 1000
This error continues even after restarting Python and running the second code first. I've tried other parameters; some seem to work, but others give me different index-out-of-bounds errors. Some say 1001 is out of bounds, some say 1000, some say 1008, etc.
Changing the initial condition on y (the second input for odeint, which I have as 0 on the above codes) also changes the number on the index error, so it might be that I'm misunderstanding what to put here. I wasn't told what the initial conditions should be other than that y is used as a phase of a signal, so I presumed it to be initially 0.
What you want to do is
def w(t):
    return w1*(1 + A*np.sin(w2*t))

def f(y, t0):
    return w(t0)
Array indices are typically integers, while the time arguments and values of solutions of differential equations are typically real numbers, so there is a conceptual difficulty in invoking w[t0]. Moreover, odeint evaluates f at adaptively chosen times that are generally not integers and can step slightly beyond the last requested time, which is why the out-of-bounds index varies (1000, 1001, 1008, ...) with the parameters.
You might also try to integrate the function w directly; there is no inherent difficulty in this example.
As for coupled systems, you solve them as coupled systems.
def w(t):
    return w1*(1 + A*np.sin(w2*t))

def f(y, t):
    wt = w(t)
    return np.array([wt, wt*np.sin(y[1] - y[0])])
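Putting this together, here is a runnable version of the corrected single-equation example, checked against the closed-form antiderivative y(t) = w1*t + (w1*A/w2)*(1 - cos(w2*t)) that follows from integrating w(t) with y(0) = 0:

```python
import numpy as np
from scipy.integrate import odeint

w1 = 0.3*np.pi
w2 = 0.005*np.pi
A = 0.15

def w(t):
    return w1*(1 + A*np.sin(w2*t))

def f(y, t):
    # dy/dt depends only on t here, so f simply evaluates w at t
    return w(t)

t = np.linspace(0, 999, 1000)
y = odeint(f, 0, t)

# closed-form solution of dy/dt = w(t), y(0) = 0
exact = w1*t + (w1*A/w2)*(1 - np.cos(w2*t))
print(np.max(np.abs(y[:, 0] - exact)))   # discrepancy vs the exact solution
```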
