Choose argument position for scipy minimize - Python

I have a function defined as follows:

def function(a, b, c, d):
    # some calculation
    return output

I want to use scipy.optimize.minimize to minimize the output with respect to the variable a alone. Is there a way to tell it to solve for just the first argument (while keeping b, c and d constant)?
Thanks.

From docs:
The objective function to be minimized: fun(x, *args) -> float
and
args : tuple, optional - Extra arguments passed to the objective function and its derivatives (fun, jac and hess functions).
So simply use a wrapper (or rewrite the definition and the first lines in your function):
def functionWrapper(x, *args):
    # minimize calls fun(x, *args), so accept the constants as *args
    a = x
    b, c, d = args
    return function(a, b, c, d)
And the minimization call:

minimize(functionWrapper, initialGuess, args=(bValue, cValue, dValue))

where bValue, cValue, dValue are the constant values for the coefficients.
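Putting it together, a minimal runnable sketch; the quadratic body of function and the constant values here are made up for illustration:

import numpy as np
from scipy.optimize import minimize

def function(a, b, c, d):
    # illustrative calculation: a quadratic in a with its minimum at a = b
    return (a - b)**2 + c*a + d

def functionWrapper(x, *args):
    a = x[0]          # minimize passes x as a 1-element array
    b, c, d = args
    return function(a, b, c, d)

initialGuess = np.array([0.0])
result = minimize(functionWrapper, initialGuess, args=(2.0, 0.0, 1.0))
print(result.x)       # optimal a; b, c and d stayed fixed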

Related

How to define a function that returns the derivative to be later passed in a lambda function

Consider the following Python code:

import numpy as np
from scipy.integrate import quad

def Hubble_a(a):
    ...
    return

def t_of_a(a):
    res = np.zeros_like(a)
    for i, ai in enumerate(a):
        t, err = quad(lambda ap: 1.0/(ap*Hubble_a(ap)), 0, ai)
        res[i] = t
    return res

a = np.logspace(-8, 1, 100)
What I want to do is define a function Hubble_a(a) that gives the derivative of a divided by a, in order to integrate over it with quad. I tried to define it this way:
from scipy.fftpack import diff

def Hubble_a(a):
    da = diff(a, 1)
    da_over_a = da/a
    return da_over_a
where diff is the FFT derivative from scipy.fftpack. Then, if I execute t_of_a(a), I get an "object of type 'float' has no len()" error, presumably because quad doesn't take arrays. However, I don't think this definition makes sense in the first place: I want to pass a function such that the lambda maps to 1.0/(ap*Hubble_a(ap)), but now I'm passing the derivative of an array instead of a function that can be integrated over. So I'm looking for help on how to implement a function that maps to something like (da/dt)/a.
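The error is consistent with how quad works: it evaluates the integrand one float at a time, so Hubble_a must map a scalar to a scalar, whereas the FFT derivative of a sample array is not such a function. A sketch with an assumed closed-form H(a); the cosmological parameter values are placeholders, not from the question:

import numpy as np
from scipy.integrate import quad

def Hubble_a(a):
    # hypothetical analytic H(a) for a flat matter + Lambda universe;
    # H0, Om, OL are illustrative placeholder values
    H0, Om, OL = 70.0, 0.3, 0.7
    return H0 * np.sqrt(Om/a**3 + OL)

# quad passes a single float ap to the lambda, so this integrates fine
t, err = quad(lambda ap: 1.0/(ap*Hubble_a(ap)), 0, 1.0)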

sympy lambdify with function arguments in tuple?

Suppose you have computed fu as the result of a sympy calculation:

fu = sy.cos(x) + sy.sin(y) + 1

where

x, y = sy.symbols("x y")

are symbols. Now you want to turn fu into a numpy function of (obviously) two variables.
You can do this by:

fun = sy.lambdify((x, y), fu, "numpy")

and you produce fun(x, y). Is there a way that lambdify can produce fun(z) with x, y = z, i.e. produce the following function:

def fun(z):
    x, y = z
    return np.cos(x) + np.sin(y) + 1
According to the documentation of lambdify you can nest the symbols in the first argument to denote unpacking in the signature:
import sympy as sym
x,y = sym.symbols('x y')
fu = sym.cos(x) + sym.sin(y) + 1
# original: signature f1(x, y)
f1 = sym.lambdify((x,y), fu)
f1(1, 2) # returns 2.4495997326938213
# nested: signature f2(z) where x,y = z
f2 = sym.lambdify([(x,y)], fu)
f2((1, 2)) # returns 2.4495997326938213
Even if this weren't possible within lambdify, we could define a thin wrapper that unpacks the arguments and forwards them to the lambdified function. (This adds one extra function call per evaluation, so for fast functions that are called many times it could have a measurable impact on runtime.)
f = sym.lambdify((x, y), fu)  # signature f(x, y)

def unpacking_f(z):  # signature unpacking_f(z) where x, y = z
    return f(*z)
Of course if the function is not for a single, throw-away use in a numerical solver (such as curve fitting or minimization), it's good practice to use functools.wraps for wrappers. This would preserve the docstring automatically generated by lambdify.
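A minimal sketch of that functools.wraps pattern, using the same names as above:

import functools
import sympy as sym

x, y = sym.symbols('x y')
fu = sym.cos(x) + sym.sin(y) + 1
f = sym.lambdify((x, y), fu)

@functools.wraps(f)          # copies f's metadata, including the lambdify docstring
def unpacking_f(z):
    return f(*z)

unpacking_f((1, 2))          # 2.4495997326938213
print(unpacking_f.__doc__)   # docstring automatically generated by lambdify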

Python: passing arguments to a function

I am using dlib's find_min_global function, an optimization algorithm which helps to find the values that minimize the output of a function. For example:

import dlib
from math import sin, cos, exp, sqrt, pi

def holder_table(x0, x1):
    return -abs(sin(x0)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))

x, y = dlib.find_min_global(holder_table,
                            [-10, -10],  # Lower bound constraints on x0 and x1 respectively
                            [10, 10],    # Upper bound constraints on x0 and x1 respectively
                            80)          # The number of times find_min_global() will call holder_table()
Here holder_table returns the value to be minimized for different values of x0 and x1, and it takes in only the values being optimized. But the function I want to use with dlib takes more than x0 and x1. Its definition looks like so:
def holder_table(a, b, x0, x1):
    return -abs(sin(b*x0/a)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))
The values a, b are not fixed; they are the outputs of another function. I could call that function directly inside holder_table, but I don't want to end up re-calculating them: a and b would be re-computed on every call to holder_table, and that process is time-consuming.
How do I pass a, b to the holder_table function?
Your question is not 100% clear, but it looks like you want partial application. In Python this can be done with the dedicated functools.partial object, or quite simply with a closure (using either an inner function or a lambda):
def holder_table(a, b, x0, x1):
    return -abs(sin(b*x0/a)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))

def main():
    a, b = some_heavy_function(...)
    holder_table_partial = lambda ax, ay: holder_table(a, b, ax, ay)
    x, y = dlib.find_min_global(
        holder_table_partial, [-10, -10], [10, 10], 80
    )
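For completeness, the functools.partial variant mentioned above would look like this; it assumes dlib accepts any Python callable here, and it reuses the hypothetical some_heavy_function from the snippet above:

from functools import partial

def main():
    a, b = some_heavy_function(...)
    # fix the two leading arguments; the result is callable as (x0, x1)
    holder_table_partial = partial(holder_table, a, b)
    x, y = dlib.find_min_global(
        holder_table_partial, [-10, -10], [10, 10], 80
    )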
Going only by your presentation of the specification: holder_table is a function that takes two arguments and returns the final result used to guide the optimization step. Also, if I understand correctly, a and b are components of the objective formula that take a while to compute, and you don't want them recomputed more often than necessary, so including their derivation inside holder_table would be inefficient.
What about something like:
def build_objective_function(a, b):
    def holder_table(x0, x1):
        return -abs(sin(b*x0/a)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))
    return holder_table
And you'd call it like:
a = <compute a>
b = <compute b>
holder_table = build_objective_function(a, b)  # holder_table will be a function
x, y = dlib.find_min_global(holder_table,
                            [-10, -10],  # Lower bound constraints on x0 and x1 respectively
                            [10, 10],    # Upper bound constraints on x0 and x1 respectively
                            80)          # The number of times find_min_global() will call holder_table()

Reformulation of a lambda function with functools.partial in Python

I currently have the following structure:
Inside a class I need to handle several types of functions with two special variables and an arbitrary number of parameters. To wrap them for the methods I apply them to, I scan the function signatures first (that works very reliably) and decide which are the parameters and which are my variables.
I then bind them back with a lambda expression in the following way. Let func(x, *args) be my function; then I bind

f = lambda x, t: func(x=x, **func_parameter)

In the case that I get func(x, t, *args) I bind

f = lambda x, t: func(x=x, t=t, **func_parameter)

and similarly if I have neither variable.
It is essential that I hand a function of the form f(x,t) to my methods inside that class.
I would like to use functools.partial for this: it is the more Pythonic way, and execution performance is better (the function f is potentially called a couple of million times...). The problem is that I don't know what to do when a basis function is independent of one of the variables t and x; that's why I went with lambda functions at all, since they just map the other variable 'blind'. It's still two function calls, and while definitions with lambda and partial take the same time, execution is a lot faster with partial.
Does anyone know how to use partial in that case? Performance is kind of an issue here.
EDIT: A little later. I figured out that function evaluation with tuple arguments is faster than with keyword arguments, so that was a plus.
And then, in the end, as a user I would just take some of the guesswork away from Python, i.e. directly define
def func(x):
    return 2*x

instead of

def func(x, a):
    return a*x
and call it directly. That way I can use the function as-is. The second case would be to implement the variant where x and t are both present as a partial mapping.
That might be a compromise.
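As a sketch of what the partial route could look like (function names and parameter values here are illustrative, not from the question): functools.partial can fix the trailing parameters by keyword, leaving f(x, t) free, while a basis function that ignores t still needs a thin wrapper:

from functools import partial

def func(x, t, a, b):
    return a*x + b*t

# both variables present: bind the parameters, keep the signature f(x, t)
f = partial(func, a=2.0, b=3.0)
f(1.0, 1.0)  # func(1.0, 1.0, a=2.0, b=3.0) -> 5.0

def g(x, a):
    return a*x

# t-independent case: partial fixes a, but absorbing the unused t
# still takes a lambda (or an adapter class like the ones below)
g_a = partial(g, a=2.0)
f2 = lambda x, t: g_a(x)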
You could write adapter classes that have an f(x, t) call signature. The result is similar to functools.partial but much more flexible: __call__ gives you a consistent call signature and lets you add, drop, and map parameters, and arguments can be fixed when an instance is made. It seems like it should execute as fast as a normal function, but I have no basis for that.
A toy version:
class Adapt:
    '''return a function with call signature f(x, t)'''
    def __init__(self, func, **kwargs):
        self.func = func
        self.kwargs = kwargs
    def __call__(self, x, t):
        # mapping magic goes here
        return self.func(x, t, **self.kwargs)
        # return self.func(a=x, b=t, **self.kwargs)

def f(a, b, c):
    print(a, b, c)
Usage:
>>> f_xt = Adapt(f, c = 4)
>>> f_xt(3, 4)
3 4 4
>>>
I don't know how you could make that generic for arbitrary parameters and mappings; maybe someone will chime in with an idea or an edit.
If you end up writing an adapter specific to each function, the function can be embedded in the class instead of being an instance parameter:
class AdaptF:
    '''return a function with call signature f(x, t)'''
    def __init__(self, **kwargs):
        self.kwargs = kwargs
    def __call__(self, x, t):
        '''does stuff with x and t'''
        # mapping magic goes here
        return self.func(a=x, b=t, **self.kwargs)
    def func(self, a, b, c):
        print(a, b, c)
>>> f_xt = AdaptF(c = 4)
>>> f_xt(x = 3, t = 4)
3 4 4
>>>
I just kinda made this up from stuff I have read, so I don't know if it is viable. I feel like I should give credit to the source, but for the life of me I can't find it - I probably saw it on a pyvideo.

Python scipy.optimize.fmin_l_bfgs_b algorithm: how to have extra outputs?

I am using the L-BFGS-B optimizer from the SciPy package. Here is the documentation.
My code has the following structure. obj_func is the objective function used to minimize "a"; to do that, obj_func should return only "a".
Question: Is there a way to obtain "b" and "c" from obj_func?
Currently I am using a function attribute, but I am not sure this is the preferred way.
def obj_func(x, *args):
    a, b, c = compute(x)  # compute is where the major computation happens
    # I want to access b and c from outside the function
    obj_func.result = [b, c]  # use a function attribute
    return a

result = optimize.fmin_l_bfgs_b(obj_func, x0, bounds=bounds, args=args)
b, c = obj_func.result
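A common alternative to the function-attribute pattern is a small callable class that stores the side outputs on the instance; this is a sketch, assuming compute, x0, bounds and args exist as in the question:

from scipy import optimize

class ObjFunc:
    '''callable objective that records the latest b and c on the instance'''
    def __call__(self, x, *args):
        a, b, c = compute(x)  # compute as in the question
        self.result = [b, c]  # stash the side outputs for later access
        return a

obj_func = ObjFunc()
result = optimize.fmin_l_bfgs_b(obj_func, x0, bounds=bounds, args=args)
b, c = obj_func.result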
