In my problem, I need to update the args value inside the cost function, but args is a function argument and also has a tuple structure. I was wondering whether there is a way to change the elements of args and update them so that the jac function can use them. For example, in the code below:
```
paraList = [detValVec, projTrans, MeasVec,
            coeMat, resVec, absCoeRec]

res = optimize.minimize(costFunc, x0, args=(paraList,), method='BFGS',
                        jac=gradientFunc,
                        options={'gtol': 1e-6, 'disp': True})

def costFunc(x0, arg):
    para = list(arg)
    para[3], para[4], para[5] = forwardModelFunc(para[0], para[1], para[2])
    return para[5]
```
I would like to update para[3], para[4], para[5] in the args argument.
In order to minimize costFunc you must be able to vary the input parameters (otherwise it'll always have the same value!). The optimize.minimize function will vary ("update") the x but it will leave args untouched as it calls costFunc, which means that your paraList should really be given as x, not args.
Since costFunc depends only on the first three values in your parameter list para[:3], updating the last three para[3:] will have no effect, so you can use x = para[:3] and args = para[3:]. In fact, you don't even need args at all, since it has no effect.
Something like:
```
paraList = [detValVec, projTrans, MeasVec, coeMat, resVec, absCoeRec]

def costFunc(x):
    out = forwardModelFunc(x[0], x[1], x[2])
    return out[2]

x0 = paraList[:3]  # the initial guess
res = optimize.minimize(costFunc, x0, method='BFGS', jac=gradientFunc,
                        options={'gtol': 1e-6, 'disp': True})
```
So the optimal result you'll get (returned in res.x) will be the best values for the first three parameters in paraList: detValVec, projTrans, and MeasVec. If you want to get the last three values that they imply, you can just call forwardModelFunc on res.x:
```
paraList_opt = list(res.x) + list(forwardModelFunc(*res.x))
```
Of course, it's important to understand the limitations of optimize.minimize: it can only minimize over an array x if it is a 1d array of scalars, so hopefully the values in your paraList are scalars. If not, you'll have to flatten and concatenate them, as in the sketch below. Also, it will pass the same x and args to the jacobian gradientFunc, so be sure that it is properly formatted as well.
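For example, a minimal sketch of the flattening approach (the shapes, initial values, and some_scalar_cost below are all hypothetical):
```
import numpy as np

# Hypothetical shapes for two array-valued parameters.
shape_a, shape_b = (3,), (2, 2)
n_a = int(np.prod(shape_a))

def costFunc(x):
    # Unpack the flat 1d vector back into the original arrays.
    a = x[:n_a].reshape(shape_a)
    b = x[n_a:].reshape(shape_b)
    return some_scalar_cost(a, b)  # hypothetical scalar-valued cost function

# Flatten and concatenate the initial arrays into one 1d initial guess.
a0, b0 = np.ones(shape_a), np.ones(shape_b)  # hypothetical initial values
x0 = np.concatenate([a0.ravel(), b0.ravel()])
```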
The way the FTestAnovaPower function works is that it takes 5 parameters, but one needs to be left as None, and the function calculates whichever argument is left as None. My problem is, I want to generate a list of values for one parameter (the one that's None) while keeping the other values constant, hence the for loop.
```
class smpl_pwr(statsmodels.stats.power.FTestAnovaPower):
    def __init__(self, effect_size=None, nobs=None, alpha=None,
                 power=None, k_groups=None, vals=None):
        #super().__init__(effect_size, nobs, alpha, power, k_groups)
        X_func = statsmodels.stats.power.FTestAnovaPower()
        self.effect_size = effect_size
        self.nobs = nobs
        self.alpha = alpha
        self.power = power
        self.k_groups = k_groups
        self.vals = vals

    def new_lst(self):
        smpl_size = []
        for x in vals:
            smpl_size.append(X_func.solve_power(self, effect_size, nobs,
                                                alpha, k_groups, power=x))
        return smpl_size
```
I want to be able to generate a list or an array of values, but all I get is a memory address.
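For comparison, here is a minimal sketch that skips the subclass entirely and calls solve_power on a plain FTestAnovaPower instance, solving for nobs at each power value (the effect_size, alpha, k_groups, and vals values below are hypothetical):
```
import statsmodels.stats.power

analysis = statsmodels.stats.power.FTestAnovaPower()
vals = [0.7, 0.8, 0.9]  # hypothetical list of power values

# solve_power computes whichever argument is left as None (here nobs).
smpl_size = [
    analysis.solve_power(effect_size=0.25, nobs=None, alpha=0.05,
                         power=p, k_groups=3)
    for p in vals
]
```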
I am new to Python and my task is to minimize math functions which have 3 return values (as provided in a template I must use), but I only need the first of these returns. Here is an example:
```
import numpy as np

def exponential_function(x):
    value = -np.exp(-0.5 * (x[0]**2 + x[1]**2))
    grad = np.array([-value * x[0], -value * x[1]])
    return value, grad, np.array([0, 0])
```
This has to be the first argument of optimize.minimize. That would work if there were only one return value (value), but in this case I have no idea. I tried wrapper functions, but I failed.
Thank you in advance
A function object, suitable as the first argument of optimize.minimize, that takes only the first of these returns is:
```
lambda x: exponential_function(x)[0]
```
What kind(s) of wrapper did you try? You don't need anything fancy, just something that calls the given function but returns only the first result, value:
```
def exponential_function(x):
    value = -np.exp(-0.5 * (x[0]**2 + x[1]**2))
    grad = np.array([-value * x[0], -value * x[1]])
    return value, grad, np.array([0, 0])

def myfunc(x):
    value, grad, arr = exponential_function(x)
    return value
```
You can use lambda as suggested in other answers, but I tried to make a more explicit wrapper function that might be easier to understand.
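To show the wrapper in context, here is a minimal sketch of the optimize.minimize call itself (the starting point is hypothetical; myfunc and exponential_function are as defined above, and the template's second return is passed as the gradient):
```
import numpy as np
from scipy import optimize

res = optimize.minimize(
    myfunc,                                    # returns only the scalar value
    x0=np.array([1.0, 1.0]),                   # hypothetical initial guess
    jac=lambda x: exponential_function(x)[1],  # the second return is the gradient
    method='BFGS',
)
```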
When we ask what you tried, we don't expect working attempts. We want to see what you tried, to get a better idea of what you understand (or are missing). The goal is to get you to think and, where possible, solve your own problems, not to spoon-feed answers.
Hey, you can just call the function with three variables to store the returned values, e.g.:
```
value_return, grad_return, array_return = exponential_function(x)
```
So every return is stored in the appropriate variable. Afterwards you can use these variables (outside of the function!).
Alternatively, just delete the other returns that you do not need.
Consider the following Python code
```
def Hubble_a(a):
    ...
    return

def t_of_a(a):
    res = np.zeros_like(a)
    for i, ai in enumerate(a):
        t, err = quad(lambda ap: 1.0/(ap*Hubble_a(ap)), 0, ai)
        res[i] = t
    return res

a = np.logspace(-8, 1, 100)
```
What I want to do is to define a function Hubble_a(a) that gives the derivative of a divided by a, in order to integrate over it with quad. I tried to define it in this way:
```
def Hubble_a(a):
    da = diff(a, 1)
    da_over_a = da/a
    return da_over_a
```
where diff is the FFT derivative imported from scipy.fftpack. Then, if I execute t_of_a(a), I get an `object of type 'float' has no len()` error, presumably because quad doesn't take arrays? However, I don't think this definition makes any sense in the first place, because I want to pass a function such that the lambda maps to 1.0/(ap*Hubble_a(ap)), but now I'm passing the derivative of an array instead of a function that can then be integrated over. So I'm looking for help on how to implement a function that maps to something like (da/dt)/a.
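For what it's worth, quad evaluates its integrand at individual scalar points, so Hubble_a needs to map a single scalar a to a value rather than differentiate a sampled array. A minimal sketch with a purely illustrative analytic H(a) (the functional form and parameter values below are hypothetical, not derived from the question):
```
import numpy as np
from scipy.integrate import quad

# Hypothetical analytic form: quad evaluates the integrand at scalar
# points, so Hubble_a must accept and return a scalar.
def Hubble_a(a, H0=70.0, Om=0.3, OL=0.7):
    return H0 * np.sqrt(Om / a**3 + OL)

def t_of_a(a):
    res = np.zeros_like(a)
    for i, ai in enumerate(a):
        t, err = quad(lambda ap: 1.0 / (ap * Hubble_a(ap)), 0, ai)
        res[i] = t
    return res

a = np.logspace(-8, 1, 100)
t = t_of_a(a)
```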
I am using dlib's find_min_global function, an optimization algorithm which helps to find values which minimize the output of a function. For example
```
import dlib
from math import sin, cos, exp, sqrt, pi

def holder_table(x0, x1):
    return -abs(sin(x0)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))

x, y = dlib.find_min_global(holder_table,
                            [-10, -10],  # Lower bound constraints on x0 and x1 respectively
                            [10, 10],    # Upper bound constraints on x0 and x1 respectively
                            80)          # The number of times find_min_global() will call holder_table()
```
Here the holder_table function returns the value that needs to be minimized for different values of x0 and x1, and it takes in only the values being optimized, that is, x0 and x1. But the function that I want to use with dlib takes more than x0 and x1. Its definition looks like this:
```
def holder_table(a, b, x0, x1):
    return -abs(sin(b*x0/a)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))
```
The values a and b are not fixed; they are the outputs of another function. Now, I could directly call the function that returns a and b inside holder_table, but I don't want to end up re-calculating them: each time holder_table is called, a and b would be re-calculated, and the process is time consuming.
How do I pass a, b to the holder_table function?
Your question is not 100% clear, but it looks like you want a partial application. In Python this can be done using the dedicated functools.partial object, or quite simply with a closure (using either an inner function or a lambda):
```
def holder_table(a, b, x0, x1):
    return -abs(sin(b*x0/a)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))

def main():
    a, b = some_heavy_function(...)
    holder_table_partial = lambda ax, ay: holder_table(a, b, ax, ay)
    x, y = dlib.find_min_global(
        holder_table_partial, [-10, -10], [10, 10], 80
    )
```
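The same binding with functools.partial, which fixes the leading positional arguments a and b once and leaves a callable of (x0, x1) for dlib to vary:
```
from functools import partial

# partial binds a and b as the first two positional arguments,
# leaving a two-argument callable for dlib.find_min_global.
holder_table_partial = partial(holder_table, a, b)
```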
Going only by your presentation of the specification, holder_table is a function that takes two arguments and returns the final result that can be used to help guide the optimization step. Also, if I understand correctly, a and b are components of the objective formula, but might take a while to compute and you don't want the computation of their logic to be called more frequently than necessary -- so including their derivation inside the holder_table seems inefficient.
What about something like:
```
def build_objective_function(a, b):
    def holder_table(x0, x1):
        return -abs(sin(b*x0/a)*cos(x1)*exp(abs(1-sqrt(x0*x0+x1*x1)/pi)))
    return holder_table
```
And you'd call it like:
```
a = <compute a>
b = <compute b>

holder_table = build_objective_function(a, b)  # holder_table will be a function

x, y = dlib.find_min_global(holder_table,
                            [-10, -10],  # Lower bound constraints on x0 and x1 respectively
                            [10, 10],    # Upper bound constraints on x0 and x1 respectively
                            80)          # The number of times find_min_global() will call holder_table()
```
I am trying to teach myself Python by working through some problems I came up with, and I need some help understanding how to pass functions.
Let's say I am trying to predict tomorrow's temperature based on today's and yesterday's temperature, and I have written the following function:
```
def predict_temp(temp_today, temp_yest, k1, k2):
    return k1*temp_today + k2*temp_yest
```
And I have also written an error function to compare a list of predicted temperatures with actual temperatures and return the mean absolute error:
```
def mean_abs_error(predictions, expected):
    return sum([abs(x - y) for (x, y) in zip(predictions, expected)]) / float(len(predictions))
```
Now if I have a list of daily temperatures for some interval in the past, I can see how my prediction function would have done with specific k1 and k2 parameters like this:
```
>>> past_temps = [41, 35, 37, 42, 48, 30, 39, 42, 33]
>>> pred_temps = [predict_temp(past_temps[i-1], past_temps[i-2], 0.5, 0.5) for i in xrange(2, len(past_temps))]
>>> print pred_temps
[38.0, 36.0, 39.5, 45.0, 39.0, 34.5, 40.5]
>>> print mean_abs_error(pred_temps, past_temps[2:])
6.5
```
But how do I design a function that, given an error function and my past_temps data, finds the parameters k1 and k2 of my predict_temp function that minimize the error?
Specifically, I would like to write a function minimize(*args) that takes a prediction function, an error function, and some training data, and that uses some search/optimization method (gradient descent, for example) to estimate and return the values of k1 and k2 that minimize my error given the data.
I am not asking how to implement the optimization method. Assume I can do that. Rather, I would just like to know how to pass my prediction and error functions (and my data) to my minimize function, and how to tell my minimize function that it should optimize the parameters k1 and k2. It should then automatically search a bunch of different settings of k1 and k2, applying my prediction function with those parameters to the data and computing the error (like I did manually for k1=0.5 and k2=0.5 above), and then return the best results.
I would like to be able to pass these functions so I can easily swap in different prediction and error functions (differing by more than just parameter settings that is). Each prediction function might have a different number of free parameters.
My minimize function should look something like this, but I don't know how to proceed:
```
def minimize(prediction_function, which_args_to_optimize, error_function, data):
    # 1: guess initial parameters
    # 2: apply prediction function with current parameters to data to compute predictions
    # 3: use error function to compute error between predictions and data
    # 4: if stopping criterion is met, return parameters
    # 5: update parameters
    # 6: GOTO 2
```
Edit: It's that easy?? This is no fun. I am going back to Java.
On a more serious note, I think I was also getting hung up on how to use different prediction functions with different numbers of parameters to tune. If I just take all the free parameters in as one tuple, I can keep the form of the function the same, so it is easy to pass and use.
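For instance, a minimal sketch of that tuple convention applied to the temperature predictor from above:
```
# All free parameters travel as one tuple, so prediction functions
# with different parameter counts share the same call signature.
def predict_temp(temp_today, temp_yest, params):
    k1, k2 = params
    return k1 * temp_today + k2 * temp_yest

print(predict_temp(41, 35, (0.5, 0.5)))  # 38.0
```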
Here is an example of how to pass a function into another function. apply_func_to will take a function f and a number num as parameters and return f(num).
```
def my_func(x):
    return x*x

def apply_func_to(f, num):
    return f(num)

>>> apply_func_to(my_func, 2)
4
```
If you want to be clever you can use lambda (anonymous functions) too. These allow you to pass functions "on the fly" without having to define them separately:
```
>>> apply_func_to(lambda x: x*x, 3)
9
```
Hope this helps.
Function passing in Python is easy, you just use the name of the function as a variable which contains the function itself.
```
def predict(...):
    ...

minimize(predict, ..., mean_abs_error, ...)
```
As for the rest of the question: I'd suggest looking at the way SciPy implements this as a model. Basically, they have a function leastsq which minimizes the sum of the squares of the residuals (I presume you know what least-squares minimization is ;-). What you pass to leastsq is a function to compute the residuals, initial guesses for the parameters, and arbitrary extra arguments that get passed on to your residual-computing function; those extra arguments include the data:
```
from scipy.optimize import leastsq

# params will be an array of your k's, i.e. [k1, k2]
def residuals(params, measurements, times):
    return predict(params, times) - measurements

leastsq(residuals, initial_parameters, args=(measurements, times))
```
Note that SciPy doesn't actually concern itself with how you come up with the residuals. The measurements array is just passed unaltered to your residuals function.
I can look up an example I did recently if you want more information - or you can find examples online, of course, but in my experience they're not quite as clear. The particular bit of code I wrote would relate well to your scenario.
As David and Il-Bhima note, functions can be passed into other functions just like any other type of object. When you pass a function in, you simply call it like you ordinarily would. People sometimes refer to this ability by saying that functions are first class in Python. At a slightly greater level of detail, you should think of functions in Python as being one type of callable object. Another important type of callable object in Python is class objects; in this case, calling a class object creates an instance of that object. This concept is discussed in detail here.
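A minimal sketch of both kinds of callables (the names here are illustrative):
```
def double(x):          # a plain function object: one kind of callable
    return 2 * x

class Doubler:
    """Instances define __call__, so they are callable like functions."""
    def __call__(self, x):
        return 2 * x

# Both can be passed wherever a function is expected:
print(double(3))     # 6
print(Doubler()(3))  # 6 -- calling the class creates an instance first
```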
Generically, you will probably want to leverage the positional and/or keyword argument features of Python, as described here. This will allow you to write a generic minimizer that can minimize prediction functions taking different sets of parameters. I've written an example; it's more complicated than I'd like (it uses generators!) but it works for prediction functions with arbitrary parameters. I've glossed over a few details, but this should get you started:
```
def predict(data, k1=None, k2=None):
    """Make the prediction."""
    pass

def expected(data):
    """Expected results from data."""
    pass

def mean_abs_err(pred, exp):
    """Compute mean absolute error."""
    pass

def gen_args(pred_args, args_to_opt):
    """Update prediction function parameters.

    pred_args   : a dict to update
    args_to_opt : a dict of arguments/iterables to apply to pred_args

    This is a generator that updates a number of variables
    over a given numerical range. Equivalent to itertools.product.
    """
    base_args = pred_args.copy()  # don't modify input
    argnames = args_to_opt.keys()
    argvals = args_to_opt.values()
    result = [[]]
    # Generate the cartesian product of all argument values
    for argv in argvals:
        result = [x + [y] for x in result for y in argv]
    for prod in result:
        base_args.update(zip(argnames, prod))
        yield dict(base_args)  # yield a copy so each stored result is distinct

def minimize(pred_fn, pred_args, args_to_opt, err_fn, data):
    """Minimize pred_fn(data) over a set of parameters.

    pred_fn     : function used to make predictions
    pred_args   : dict of keyword arguments to pass to pred_fn
    args_to_opt : a dict of arguments/iterables to apply to pred_args
    err_fn      : function used to compute error
    data        : data to use in the optimization

    Returns a tuple (error, parameters) of the best set of input parameters.
    """
    results = []
    for new_args in gen_args(pred_args, args_to_opt):
        pred = pred_fn(data, **new_args)  # Unpack dictionary
        err = err_fn(pred, expected(data))
        results.append((err, new_args))
    return min(results, key=lambda r: r[0])  # dicts aren't orderable, so compare errors only

const_args = {'k1': 1}
opt_args = {'k2': range(10)}
data = []  # Whatever data you like.

minimize(predict, const_args, opt_args, mean_abs_err, data)
```
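Since the gen_args docstring notes it is equivalent to itertools.product, here is that variant as a minimal sketch:
```
import itertools

def gen_args_product(pred_args, args_to_opt):
    """Same parameter sweep as gen_args, built on itertools.product."""
    names = list(args_to_opt)
    for combo in itertools.product(*args_to_opt.values()):
        new_args = dict(pred_args)          # fresh dict per combination
        new_args.update(zip(names, combo))
        yield new_args
```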