I am using dlib's find_min_global function, an optimization algorithm that finds the input values that minimize a function's output. For example:
import dlib
from math import sin, cos, exp, sqrt, pi

def holder_table(x0, x1):
    return -abs(sin(x0)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))

x, y = dlib.find_min_global(holder_table,
                            [-10, -10],  # Lower bound constraints on x0 and x1 respectively
                            [10, 10],    # Upper bound constraints on x0 and x1 respectively
                            80)          # The number of times find_min_global() will call holder_table()
Here the holder_table function returns the value that needs to be minimized for different values of x0 and x1, and it takes in only the variables being optimized, that is, x0 and x1. But the function I want to use with dlib takes more than x0 and x1. Its definition looks like this:
def holder_table(a, b, x0, x1):
    return -abs(sin(b*x0/a)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))
The values a and b are not fixed; they are the outputs of another function. I could call the function that returns a and b directly inside holder_table, but I don't want to end up recalculating them, because each time holder_table is called a and b would be recomputed, and that computation is time consuming.
How do I pass a, b to the holder_table function?
Your question is not 100% clear, but it looks like you want a partial application. In Python this can be done with the dedicated functools.partial object, or quite simply with a closure (using either an inner function or a lambda):
def holder_table(a, b, x0, x1):
    return -abs(sin(b*x0/a)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))

def main():
    a, b = some_heavy_function(...)
    holder_table_partial = lambda ax, ay: holder_table(a, b, ax, ay)
    x, y = dlib.find_min_global(
        holder_table_partial, [-10, -10], [10, 10], 80
    )
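For reference, the functools.partial form mentioned above would replace the lambda inside main(); a minimal sketch (if dlib cannot introspect the partial's remaining signature, the lambda above is the safe fallback):

from functools import partial

# Equivalent to the lambda: fixes a and b, leaving (x0, x1) free.
holder_table_partial = partial(holder_table, a, b)
x, y = dlib.find_min_global(holder_table_partial, [-10, -10], [10, 10], 80)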
Going only by your presentation of the specification, holder_table is a function that takes two arguments and returns the value used to guide the optimization step. Also, if I understand correctly, a and b are components of the objective formula that take a while to compute, and you don't want them recomputed more often than necessary -- so including their derivation inside holder_table seems inefficient.
What about something like:
def build_objective_function(a, b):
    def holder_table(x0, x1):
        return -abs(sin(b*x0/a)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))
    return holder_table
And you'd call it like:
a = <compute a>
b = <compute b>
holder_table = build_objective_function(a, b)  # holder_table will be a function

x, y = dlib.find_min_global(holder_table,
                            [-10, -10],  # Lower bound constraints on x0 and x1 respectively
                            [10, 10],    # Upper bound constraints on x0 and x1 respectively
                            80)          # The number of times find_min_global() will call holder_table()
Consider the following Python code:
import numpy as np
from scipy.integrate import quad

def Hubble_a(a):
    ...
    return

def t_of_a(a):
    res = np.zeros_like(a)
    for i, ai in enumerate(a):
        t, err = quad(lambda ap: 1.0/(ap*Hubble_a(ap)), 0, ai)
        res[i] = t
    return res

a = np.logspace(-8, 1, 100)
What I want to do is to define a function Hubble_a(a) that gives the derivative of a divided by a, in order to integrate over it with quad. I tried to define it in this way:
def Hubble_a(a):
    da = diff(a, 1)
    da_over_a = da/a
    return da_over_a
where diff is the FFT derivative imported from scipy.fftpack. Then, if I execute t_of_a(a) I get an object of type 'float' has no len() error, presumably because quad doesn't take arrays. However, I don't think this definition makes sense in the first place: the lambda should map ap to 1.0/(ap*Hubble_a(ap)), but now I'm passing the derivative of an array instead of a function that can then be integrated over. So I'm looking for help on how to implement a function that maps to something like (da/dt)/a.
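For reference, quad integrates a callable that maps a single scalar to a scalar, evaluated pointwise during the integration; differentiating a whole array does not fit that interface. A minimal sketch of the expected shape, with a hypothetical closed-form model H(a) = a**-1.5 standing in for the real physics:

from scipy.integrate import quad

def Hubble_a(a):
    # Hypothetical closed-form model, for illustration only;
    # quad calls this with one scalar value of a at a time.
    return a**-1.5

t, err = quad(lambda ap: 1.0/(ap*Hubble_a(ap)), 0, 1.0)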
I have a function defined as follows:
def function(a, b, c, d):
    ### some calculation
    return output
I want to use scipy.optimize.minimize to minimize the output with respect to variable a alone. Is there a way to tell it to solve for just the first variable (while keeping b, c and d constant)?
Thanks.
From docs:
The objective function to be minimized: fun(x, *args) -> float
and
args : tuple, optional: Extra arguments passed to the objective function and its derivatives (fun, jac and hess functions).
So simply use a wrapper (or rewrite the definition and the first lines in your function):
def functionWrapper(x, *args):
    a = x[0]        # scipy passes x as a 1d array
    b, c, d = args  # the fixed coefficients
    return function(a, b, c, d)
And the minimization call:
minimize(functionWrapper, initialGuess, args=(bValue, cValue, dValue))
where bValue, cValue, dValue are the constant values for the coefficients.
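Put together, a minimal runnable sketch (with a hypothetical quadratic standing in for the real calculation):

import numpy as np
from scipy.optimize import minimize

def function(a, b, c, d):
    # Hypothetical stand-in for the actual computation.
    return (a - b)**2 + c*a + d

def functionWrapper(x, *args):
    a = x[0]        # scipy passes x as a 1d array
    b, c, d = args  # the fixed coefficients
    return function(a, b, c, d)

res = minimize(functionWrapper, np.array([0.0]), args=(2.0, 0.5, 1.0))
print(res.x)  # best a with b=2.0, c=0.5, d=1.0 held fixed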
Suppose you have computed fu as a result of a sympy calculation:
fu = sy.cos(x) + sy.sin(y) + 1
where
x,y = sy.symbols("x y")
are symbols. Now you want to turn fu into a numpy function of (obviously) two variables.
You can do this by:
fun = sy.lambdify((x,y), fu, "numpy")
and you produce fun(x,y). Is there a way that lambdify can produce fun(z) with x,y = z, i.e. produce the following function:
def fun(z):
    x, y = z
    return np.cos(x) + np.sin(y) + 1
According to the documentation of lambdify you can nest the symbols in the first argument to denote unpacking in the signature:
import sympy as sym
x,y = sym.symbols('x y')
fu = sym.cos(x) + sym.sin(y) + 1
# original: signature f1(x, y)
f1 = sym.lambdify((x,y), fu)
f1(1, 2) # returns 2.4495997326938213
# nested: signature f2(z) where x,y = z
f2 = sym.lambdify([(x,y)], fu)
f2((1, 2)) # returns 2.4495997326938213
Even if this weren't possible within lambdify, we could define a thin wrapper that unpacks the arguments to the lambdified function (although this adds one extra function call per evaluation, which can have a measurable impact on runtime for fast functions called many times):
f = sym.lambdify((x,y), fu)  # signature f(x,y)

def unpacking_f(z):  # signature unpacking_f(z) where x,y = z
    return f(*z)
Of course if the function is not for a single, throw-away use in a numerical solver (such as curve fitting or minimization), it's good practice to use functools.wraps for wrappers. This would preserve the docstring automatically generated by lambdify.
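For instance, applying functools.wraps to the unpacking wrapper from above:

import functools

f = sym.lambdify((x, y), fu)  # signature f(x, y)

@functools.wraps(f)
def unpacking_f(z):
    # Same behaviour as before; wraps() copies f's metadata,
    # including the docstring generated by lambdify.
    return f(*z)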
I am using the L-BFGS-B optimizer from the SciPy package. Here is the documentation.
My code has the following structure. obj_func is the objective function that is used to minimize "a". In order to do so, obj_func should only return "a".
Question: Is there a way to obtain "b" and "c" from obj_func?
Currently I am using function attribute. But I am not sure this is the preferred way.
def obj_func(x, *args):
    a, b, c = compute(x)  # compute is where the major computation happens
    # I want to access b and c from outside the function
    obj_func.result = [b, c]  # use a function attribute
    return a

result = optimize.fmin_l_bfgs_b(obj_func, x0, bounds=bounds, args=args)
b, c = obj_func.result
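For comparison, an equivalent pattern that avoids function attributes is to close over a mutable container; a sketch assuming the same compute, x0, bounds, and args as above:

side_results = {}

def obj_func(x, *args):
    a, b, c = compute(x)
    side_results['b'], side_results['c'] = b, c  # stash the extras
    return a

result = optimize.fmin_l_bfgs_b(obj_func, x0, bounds=bounds, args=args)
b, c = side_results['b'], side_results['c']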
In my problem, I need to update the args values inside the cost function, but args is a function argument with a tuple structure. Is there a way to change the elements of args and have the updated values also used by the jac function? For example, in the code below:
paraList = [detValVec, projTrans, MeasVec,
            coeMat, resVec, absCoeRec]

res = optimize.minimize(costFunc, x0, args=(paraList,), method='BFGS',
                        jac=gradientFunc, options={'gtol': 1e-6, 'disp': True})
def costFunc(x0, arg):
    para = list(arg)
    para[3], para[4], para[5] = forwardModelFunc(para[0], para[1], para[2])
    return para[5]
I would like to update para[3], para[4], para[5] in the args argument.
In order to minimize costFunc you must be able to vary the input parameters (otherwise it'll always have the same value!). The optimize.minimize function will vary ("update") the x but it will leave args untouched as it calls costFunc, which means that your paraList should really be given as x, not args.
Since costFunc depends only on the first three values in your parameter list para[:3], updating the last three para[3:] will have no effect, so you can use x = para[:3] and args = para[3:]. In fact, you don't even need args at all, since it has no effect.
Something like:
paraList = [detValVec, projTrans, MeasVec, coeMat, resVec, absCoeRec]

def costFunc(x):
    out = forwardModelFunc(x[0], x[1], x[2])
    return out[2]

x0 = paraList[:3]  # the initial guess
res = optimize.minimize(costFunc, x0, method='BFGS', jac=gradientFunc,
                        options={'gtol': 1e-6, 'disp': True})
So the optimal result you'll get (returned in res.x) will be the best values for the first three parameters in paraList: detValVec, projTrans, and MeasVec. If you want to get the last three values that they imply, you can just call forwardModelFunc on res.x:
paraList_opt = list(res.x) + list(forwardModelFunc(*res.x))
Of course, it's important to understand the limitations of optimize.minimize: it can only minimize over x if it is a 1d array of scalars, so hopefully the values in your paraList are scalars. If not, you'll have to flatten and concatenate them, as sketched below. Also, it will pass the same x and args to the jacobian gradientFunc, so be sure that it is properly formatted as well.
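For example, a minimal sketch of flattening mixed-shape parameters into the single 1d x that minimize requires (the shapes here are hypothetical):

import numpy as np

# Hypothetical parameters: a scalar, a length-3 vector, and a 2x2 matrix.
p0, p1, p2 = 1.0, np.zeros(3), np.eye(2)
x0 = np.concatenate([[p0], p1.ravel(), p2.ravel()])  # one flat 1d array, length 8

def unpack(x):
    # Slice the flat vector back into the original shapes.
    return x[0], x[1:4], x[4:8].reshape(2, 2)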