Finding the maximum of a function - python

How do I find the maximum of a function in Python? I could try to hack together a derivative function and find the zero of that, but is there a method in numpy (or another library) that can do it for me?

You can use scipy.optimize.fmin on the negative of your function:
import scipy.optimize

def f(x):
    return -2 * x**2 + 4 * x

max_x = scipy.optimize.fmin(lambda x: -f(x), 0)
# array([ 1.])
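fmin returns an array of coordinates rather than the maximum itself, so re-evaluate f at the result to get the maximum value (a minimal sketch):
print(f(max_x[0]))  # ≈ 2.0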

If your function is solvable analytically, try SymPy. I'll use EMS's example from above.
In [1]: from sympy import *
In [2]: x = Symbol('x', real=True)
In [3]: f = -2 * x**2 + 4*x
In [4]: fprime = f.diff(x)
In [5]: fprime
Out[5]: -4*x + 4
In [6]: solve(fprime, x) # solve fprime = 0 with respect to x
Out[6]: [1]
Of course, you'll still need to check that 1 is a maximizer and not a minimizer of f:
In [7]: f.diff(x).diff(x) < 0
Out[7]: True
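The comparison evaluates directly here because the second derivative is the constant -4. If it still depended on x, you would substitute the candidate point first (a sketch continuing the same session):
In [8]: f.diff(x, 2).subs(x, 1) < 0
Out[8]: True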

I think scipy.optimize.minimize_scalar and scipy.optimize.minimize are the preferred ways now; they give you access to a whole range of techniques, e.g.
solution = scipy.optimize.minimize_scalar(lambda x: -f(x), bounds=[0,1], method='bounded')
for a single-variable function that must lie between 0 and 1.
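For completeness, a minimal runnable version of this, reusing the f from the first answer (a sketch):
from scipy.optimize import minimize_scalar

def f(x):
    return -2 * x**2 + 4 * x

# maximize f over [0, 1] by minimizing its negative
solution = minimize_scalar(lambda x: -f(x), bounds=(0, 1), method='bounded')
print(solution.x, -solution.fun)  # maximizer ≈ 1.0, maximum ≈ 2.0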

You could try SymPy. SymPy might be able to provide you with the derivative symbolically, find its zeros, and so on.

Maximum of a function with parameters.
import scipy.optimize as opt

def get_function_max(f, *args):
    """
    >>> round(get_function_max(lambda x, *a: 3.0 - 2.0*(x**2)), 2)
    3.0
    >>> round(get_function_max(lambda x, *a: 3.0 - 2.0*(x**2) - 2.0*x), 2)
    3.5
    >>> round(get_function_max(lambda x, *a: a[0] - a[1]*(x**2) - a[1]*x, 3.0, 2.0), 2)
    3.5
    """
    # minimize the negated function, then evaluate f at the minimizer found
    def func(x, *args):
        return -f(x, *args)
    return f(opt.fmin(func, 0, args=args, disp=False)[0], *args)
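A quick usage check with hypothetical parameter values, mirroring the doctests:
# maximum of 5 - x**2 is 5.0, attained at x = 0
print(round(get_function_max(lambda x, *a: a[0] - a[1]*x**2, 5.0, 1.0), 2))  # 5.0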

Related

Solving nonlinear least-squares with function returning both value and jacobian

I am trying to speed up the solving of a nonlinear least-squares problem in Python. I can compute both the function value and the Jacobian via one forward pass, (val, jac) = fun(x). A solver like scipy.optimize.least_squares only accepts two separate functions, fun and jac, which for my problem means that the function value has to be computed twice per iteration (once in fun, and once in jac).
Is there a trick for avoiding solving the primal problem twice?
The more general function scipy.optimize.minimize supports the above style with the jac=True keyword, but it's slow for my problem.
I think the best approach would be to use the MemoizeJac decorator. This is exactly what is done under the hood of scipy.optimize.minimize for jac=True: the wrapper caches the Jacobian from the most recent evaluation of the combined function and hands it back when the derivative is requested at the same point.
from scipy.optimize import least_squares
from scipy.optimize._optimize import MemoizeJac

def fun_and_jac(x):
    # one forward pass returns both the value and the Jacobian
    return x**2 - 5 * x + 3, 2 * x - 5

fun = MemoizeJac(fun_and_jac)
jac = fun.derivative  # reuses the cached Jacobian instead of re-evaluating
res = least_squares(fun, x0=0, jac=jac)
print(res)
You can do a bit of a hack:
import numpy as np

val_cache = {}
jac_cache = {}

def _key(args):
    # dict keys must be hashable; numpy arrays are not, so flatten to a tuple
    return tuple(np.ravel(args))

def val_fun(*args):
    try:
        return val_cache.pop(_key(args))
    except KeyError:
        val, jac = fun(*args)
        jac_cache[_key(args)] = jac
        return val

def jac_fun(*args):
    try:
        return jac_cache.pop(_key(args))
    except KeyError:
        val, jac = fun(*args)
        val_cache[_key(args)] = val
        return jac
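With those wrappers in place, the solver can be wired up like this (a sketch, assuming fun returns a (value, jacobian) pair as in the MemoizeJac example above):
res = least_squares(val_fun, x0=np.array([0.0]), jac=jac_fun)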
From the documentation of scipy.optimize.minimize:
If jac is a Boolean and is True, fun is assumed to return a tuple (f, g) containing the objective function and the gradient.
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html?highlight=minimize
So you can simply do it like this:
from scipy.optimize import minimize

def function(x):
    '''Function that returns both fun and jac'''
    return x**2 - 5 * x + 3, 2 * x - 5

print(minimize(function, 0, jac=True))
Edit: rereading your question, it seems this option also works for least_squares, though it is undocumented there.
This works as well:
from scipy.optimize import least_squares

def function(x):
    '''Function that returns both fun and jac'''
    return x**2 - 5 * x + 3, 2 * x - 5

print(least_squares(function, 0, jac=True))

How to resolve integration function not integrating correctly?

I am trying to build a few simple operations, such as derivative and integral functions, to operate on lambda functions, because SymPy and SciPy were struggling to integrate some of the things I was passing to them.
The derivative function does not give me any issues and looks to return the derivative of the input function when plotted, but the integral function does not do the same, and does not plot the correct integral of the input.
import matplotlib.pyplot as plt
import numpy as np
from phys_func import func

sr = [-10, 10]
x = np.linspace(sr[0], sr[1], 100)
F = lambda x: x**2
f = func.I(F, x)

plt.plot(x, F(x), label='F(x)')
plt.plot(x, f(x), label='f(x)')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()
The integration function that does not work:
def I(F, x):
    dx = (x[len(x)-1] - x[0])/len(x)
    return lambda x: 0.5*(F(x+dx) + F(x))*dx
The derivative function that works:
def d(f, x):
    dx = (x[len(x)-1] - x[0])/len(x)
    return lambda x: (f(x+dx) - f(x))/dx
Can anyone lend me a hand please?
You cannot find the antiderivative of a function numerically without knowing the value of the antiderivative at a single point. Suppose you fix the value of the antiderivative to be 0 at x = a (and the given function is continuous on [a, x]); then we can use definite integrals. For this function, let us take a = 0 (i.e., 0 is a root of the antiderivative), so you can compute the definite integral over [0, x]. Also, your integration function is wrong: you need to sum all the 0.5*(F(x+dx) + F(x))*dx elements from 0 to x to get the definite integral, not just evaluate a single trapezoid at x.
You can modify I(F, x) as follows:
def I(F1):
    # N is the number of intervals; sum N trapezoid areas over [0, x]
    return lambda x, N: np.sum(0.5*(F1(np.linspace(0, x, num=N) + x/N)
                                    + F1(np.linspace(0, x, num=N)))*(x/N))
In [1]: import numpy as np
In [2]: import matplotlib.pyplot as plt
In [3]: F = lambda x: x**2
In [4]: x_ran = np.linspace(-10, 10, 100)
In [5]: y = I(F)
In [6]: y_ran = []
In [7]: for i in x_ran:
   ...:     y_ran.append(y(i, 100))
In [8]: plt.plot(x_ran, y_ran)
In [9]: plt.show()

Python ternary if on numpy array with condition on array cell values

I have a conditional function, say f(x), that takes domain values from a numpy.ndarray the_array and maps them into another numpy.ndarray, result:
f(x) = g(x) if x>0
h(x) otherwise
g(x) and h(x) here are some other functions.
To me it looks something like the following, but I don't know how to refer to the corresponding array entries in the ternary if:
result = g(the_array) if <??> else h(the_array)
If there's no problem evaluating g(x) and h(x) for all of x, then
result = np.where(x > 0, g(x), h(x))
If g can be evaluated only at x > 0, we have to do more work. For example:
mask = x > 0
result = h(x)
result[mask] = g(x[mask])
Some ufuncs take where and out parameters that work like this. If g is a ufunc:
g(x, where=x > 0, out=h(x))
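For instance, np.sqrt is a ufunc, so it can be evaluated only where x > 0 while the out array supplies h(x) everywhere else (a sketch with h(x) = -x):
import numpy as np

x = np.array([-4.0, 1.0, -9.0, 16.0])
result = np.negative(x)               # h(x) = -x, computed everywhere
np.sqrt(x, where=x > 0, out=result)   # g(x) = sqrt(x), written only where x > 0
# result is now array([4., 1., 9., 4.])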
result = the_array.copy()  # copy so the input array is not modified in place
for i in range(len(the_array)):
    result[i] = g(the_array[i]) if the_array[i] > 0 else h(the_array[i])
You can use this array-based operation, assuming your g and h functions accept all values as input (meaning g does not throw an error/exception for non-positive values, nor h for positive values). The expression is a fairly self-explanatory translation of the if statement in question:
result = g(x)*(x>0) + h(x)*(x<=0)
And if your g function only accepts positives and your h function only accepts non-positives, you can mask the array x, do the operations separately, and merge the results like this:
idx_p = np.argwhere(x > 0)
idx_np = np.argwhere(x <= 0)
result = np.zeros_like(x)
result[idx_p] = g(x[x > 0].reshape(-1, 1))
result[idx_np] = h(x[x <= 0].reshape(-1, 1))
Example code:
x = np.array([-1, 1, -2, 2])

def g(x):
    return x**2

def h(x):
    return x

result = g(x)*(x>0) + h(x)*(x<=0)
Output:
[-1 1 -2 4]
As long as your function is quick to evaluate, you can use numpy.where:
import numpy as np

def g(x):
    return x * 2

def h(x):
    return x * 10

x = np.arange(-5, 5)
result = np.where(x > 0, g(x), h(x))
After this, result is
array([-50, -40, -30, -20, -10, 0, 2, 4, 6, 8])

Why does sympy.diff not differentiate sympy polynomials as expected?

I am trying to figure out why sympy.diff does not differentiate sympy polynomials as expected. Normally, sympy.diff works just fine if a symbolic variable is defined and the polynomial is NOT defined using sympy.Poly. However, if the function is defined using sympy.Poly, sympy.diff does not seem to actually compute the derivative. Below is a code sample that shows what I mean:
import sympy as sy
# define symbolic variables
x = sy.Symbol('x')
y = sy.Symbol('y')
# define function WITHOUT using sy.Poly
f1 = x + 1
# define function WITH using sy.Poly
f2 = sy.Poly(x + 1, x, domain='QQ')
# compute derivatives and return results
df1 = sy.diff(f1,x)
df2 = sy.diff(f2,x)
print('f1: ',f1)
print('f2: ',f2)
print('df1: ',df1)
print('df2: ',df2)
This prints the following results:
f1: x + 1
f2: Poly(x + 1, x, domain='QQ')
df1: 1
df2: Derivative(Poly(x + 1, x, domain='QQ'), x)
Why does sympy.diff not know how to differentiate the sympy.Poly version of the polynomial? Is there a way to differentiate the sympy polynomial, or a way to convert the sympy polynomial to the form that allows it to be differentiated?
Note: I tried with different domains (i.e., domain='RR' instead of domain='QQ'), and the output does not change.
This appears to be a bug. You can get around it by calling diff directly on the Poly instance. Ideally, calling the function diff from the top-level sympy module should yield the same result as calling the method diff.
In [1]: from sympy import *
In [2]: from sympy.abc import x
In [3]: p = Poly(x+1, x, domain='QQ')
In [4]: p.diff(x)
Out[4]: Poly(1, x, domain='QQ')
In [5]: diff(p, x)
Out[5]: Derivative(Poly(x + 1, x, domain='QQ'), x)
In [6]: diff(p, x).doit()
Out[6]: Derivative(Poly(x + 1, x, domain='ZZ'), x)
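Another workaround (a sketch, using Poly.as_expr) is to convert the Poly back into an ordinary expression before differentiating:
In [7]: diff(p.as_expr(), x)
Out[7]: 1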

Using fsolve with scipy function

I have encountered the following problem with scipy's fsolve, but I don't know what to do about it:
U = 0.00043
ThC = 1.19
Dist = 7
IncT = 0.2
pcw = 1180000
k = 1.19
B = U * pcw / (2 * k)
fugato = fsolve((((Ql/(2*math.pi*k))*math.exp(B*x)*special.kv(0, B*x)) - IncT), 0.01)
print fugato
I get the error TypeError: 'numpy.float64' object is not callable in fsolve.
How do I fix this problem?
The argument to fsolve must be a function.
I presume that you want to solve your equation for x? If so, writing:
fugato = fsolve(lambda x: Ql/(2*math.pi*k)*math.exp(B*x)*special.kv(0, B*x) - IncT,
                0.01)
works.
To explain what's going on here, the construct lambda x: 2*x is a function definition. It is similar to writing:
def f(x):
    return 2*x
The lambda construction is commonly used to define functions that you only need once. This is often the case when registering callbacks, or to represent a mathematical expression. For instance, if you wanted to integrate f(x) = 2*x, you could write:
from scipy.integrate import quad

integral = quad(lambda x: 2*x, 0., 3.)
Similarly, if you want to solve 2*x = 1, you can write:
from scipy.optimize import fsolve

fsolve(lambda x: 2*x - 1, 0.)
