Need help understanding function passing in Python

I am trying to teach myself Python by working through some problems I came up with, and I need some help understanding how to pass functions.
Let's say I am trying to predict tomorrow's temperature based on today's and yesterday's temperature, and I have written the following function:
def predict_temp(temp_today, temp_yest, k1, k2):
    return k1*temp_today + k2*temp_yest
And I have also written an error function to compare a list of predicted temperatures with actual temperatures and return the mean absolute error:
def mean_abs_error(predictions, expected):
    return sum([abs(x - y) for (x, y) in zip(predictions, expected)]) / float(len(predictions))
Now if I have a list of daily temperatures for some interval in the past, I can see how my prediction function would have done with specific k1 and k2 parameters like this:
>>> past_temps = [41, 35, 37, 42, 48, 30, 39, 42, 33]
>>> pred_temps = [predict_temp(past_temps[i-1],past_temps[i-2],0.5,0.5) for i in xrange(2,len(past_temps))]
>>> print pred_temps
[38.0, 36.0, 39.5, 45.0, 39.0, 34.5, 40.5]
>>> print mean_abs_error(pred_temps, past_temps[2:])
6.5
But how do I design a function to optimize the parameters k1 and k2 of my predict_temp function, given an error function and my past_temps data?
Specifically, I would like to write a function minimize(*args) that takes a prediction function, an error function, and some training data, and uses some search/optimization method (gradient descent, for example) to estimate and return the values of k1 and k2 that minimize my error given the data.
I am not asking how to implement the optimization method; assume I can do that. Rather, I would just like to know how to pass my predict and error functions (and my data) to my minimize function, and how to tell my minimize function that it should optimize the parameters k1 and k2. The minimize function can then automatically search a bunch of different settings of k1 and k2, applying my prediction function with those parameters to the data and computing the error each time (like I did manually for k1=0.5 and k2=0.5 above), and return the best results.
I would like to be able to pass these functions so I can easily swap in different prediction and error functions (differing by more than just parameter settings that is). Each prediction function might have a different number of free parameters.
My minimize function should look something like this, but I don't know how to proceed:
def minimize(prediction_function, which_args_to_optimize, error_function, data):
    # 1: guess initial parameters
    # 2: apply prediction function with current parameters to data to compute predictions
    # 3: use error function to compute error between predictions and data
    # 4: if stopping criterion is met, return parameters
    # 5: update parameters
    # 6: GOTO 2
Edit: It's that easy?? This is no fun. I am going back to Java.
On a more serious note, I think I was also getting hung up on how to use different prediction functions with different numbers of parameters to tune. If I just take all the free parameters in as one tuple, I can keep the form of the function the same, so it is easy to pass and use.
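For example, here is a minimal sketch of that tuple approach, reusing past_temps and mean_abs_error from above (minimize_grid and its brute-force grid search are just illustrative, not a real library function):

def predict_temp(params, temp_today, temp_yest):
    k1, k2 = params
    return k1*temp_today + k2*temp_yest

def minimize_grid(pred_fn, err_fn, data, param_grid):
    # Try every params tuple in param_grid; return (error, params) of the best
    best = None
    for params in param_grid:
        preds = [pred_fn(params, data[i-1], data[i-2]) for i in range(2, len(data))]
        err = err_fn(preds, data[2:])
        if best is None or err < best[0]:
            best = (err, params)
    return best

grid = [(k1/10.0, k2/10.0) for k1 in range(11) for k2 in range(11)]
print(minimize_grid(predict_temp, mean_abs_error, past_temps, grid))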

Here is an example of how to pass a function into another function. apply_func_to will take a function f and a number num as parameters and return f(num).
def my_func(x):
    return x*x

def apply_func_to(f, num):
    return f(num)
>>> apply_func_to(my_func, 2)
4
If you want to be clever, you can use lambda (anonymous functions) too. These allow you to pass functions "on the fly" without having to define them separately:
>>> apply_func_to(lambda x: x*x, 3)
9
Hope this helps.

Function passing in Python is easy: you just use the name of the function as a variable which contains the function itself.
def predict(...):
    ...

minimize(predict, ..., mean_abs_error, ...)
As for the rest of the question: I'd suggest looking at the way SciPy implements this as a model. Basically, they have a function leastsq which minimizes the sum of the squares of the residuals (I presume you know what least-squares minimization is ;-). What you pass to leastsq is a function to compute the residuals, initial guesses for the parameters, and an arbitrary tuple of extra arguments that gets passed on to your residual-computing function, which includes the data:
# params will be an array of your k's, i.e. [k1, k2]
def residuals(params, measurements, times):
    return predict(params, times) - measurements

leastsq(residuals, initial_parameters, args=(measurements, times))
Note that SciPy doesn't actually concern itself with how you come up with the residuals. The measurements array is just passed unaltered to your residuals function.
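For instance, here is a hedged sketch of that pattern applied to the temperature question above (this assumes SciPy is installed, and note that leastsq minimizes squared error rather than the asker's mean absolute error):

import numpy as np
from scipy.optimize import leastsq

past_temps = np.array([41, 35, 37, 42, 48, 30, 39, 42, 33], dtype=float)

def residuals(params, temps):
    # predict day i from days i-1 and i-2, compare against the actual day i
    k1, k2 = params
    predictions = k1 * temps[1:-1] + k2 * temps[:-2]
    return predictions - temps[2:]

initial_parameters = np.array([0.5, 0.5])
best_params, ier = leastsq(residuals, initial_parameters, args=(past_temps,))
print(best_params)  # the fitted [k1, k2]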
I can look up an example I did recently if you want more information - or you can find examples online, of course, but in my experience they're not quite as clear. The particular bit of code I wrote would relate well to your scenario.

As David and Il-Bhima note, functions can be passed into other functions just like any other type of object. When you pass a function in, you simply call it like you ordinarily would. People sometimes refer to this ability by saying that functions are first class in Python. At a slightly greater level of detail, you should think of functions in Python as one type of callable object. Another important type of callable object in Python is a class object; calling a class object creates an instance of that class. This concept is discussed in detail here.
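To illustrate that point, both a plain function and instances of a class defining __call__ can be handed to the same higher-order function (Scale is a made-up example; apply_func_to echoes the earlier answer):

class Scale(object):
    """A callable object: instances behave like one-argument functions."""
    def __init__(self, factor):
        self.factor = factor
    def __call__(self, x):
        return self.factor * x

def apply_func_to(f, num):
    return f(num)

print(apply_func_to(Scale(3), 2))      # 6: the instance is called like a function
print(apply_func_to(Scale, 2).factor)  # 2: calling the class creates an instance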
Generically, you will probably want to leverage the positional and/or keyword argument features of Python, as described here. This will allow you to write a generic minimizer that can handle prediction functions taking different sets of parameters. I've written an example; it's more complicated than I'd like (it uses generators!), but it works for prediction functions with arbitrary parameters. I've glossed over a few details, but this should get you started:
def predict(data, k1=None, k2=None):
    """Make the prediction."""
    pass

def expected(data):
    """Expected results from data."""
    pass

def mean_abs_err(pred, exp):
    """Compute mean absolute error."""
    pass

def gen_args(pred_args, args_to_opt):
    """Update prediction function parameters.

    pred_args : a dict of fixed keyword arguments
    args_to_opt : a dict of argument names/iterables to apply to pred_args

    This is a generator that yields one dict per combination of the
    variables over the given numerical ranges (a cross product, like
    itertools.product).
    """
    base_args = pred_args.copy()  # don't modify input
    argnames = list(args_to_opt.keys())
    argvals = list(args_to_opt.values())
    result = [[]]
    # Generate the cross product of all argument value ranges
    for argv in argvals:
        result = [x + [y] for x in result for y in argv]
    for prod in result:
        base_args.update(zip(argnames, prod))
        yield dict(base_args)  # yield a copy, so each result is independent

def minimize(pred_fn, pred_args, args_to_opt, err_fn, data):
    """Minimize pred_fn(data) over a set of parameters.

    pred_fn : function used to make predictions
    pred_args : dict of fixed keyword arguments to pass to pred_fn
    args_to_opt : a dict of argument names/iterables to search over
    err_fn : function used to compute error
    data : data to use in the optimization

    Returns a tuple (error, parameters) of the best set of input parameters.
    """
    results = []
    for new_args in gen_args(pred_args, args_to_opt):
        pred = pred_fn(data, **new_args)  # unpack the dict as keyword arguments
        err = err_fn(pred, expected(data))
        results.append((err, new_args))
    return min(results, key=lambda r: r[0])  # lowest error wins

const_args = {'k1': 1}
opt_args = {'k2': range(10)}
data = []  # whatever data you like
minimize(predict, const_args, opt_args, mean_abs_err, data)

Related

Scipy.optimize.minimize: get number of function evaluations

I would like to store the number of function evaluations (Fevals) made by a Scipy optimization algorithm in an external variable to count the final number of evaluations made by the entire program (Scipy is repeated many times).
You can extract it from the result object returned by the optimizer (its nfev attribute) and add the value at every outer SciPy call. To see how to get the number of function evaluations, see the SciPy documentation for scipy.optimize.OptimizeResult and the examples on the same page.
In case this does not help, you may wrap your cost function and try something like this:
class F(object):
    def __init__(self, fn):
        self.n_calls = 0
        self.fn = fn
    def __call__(self, x):
        self.n_calls += 1
        return self.fn(x)

wrapped_fn = F(fn)  # pass wrapped_fn to the optimizer, then read wrapped_fn.n_calls
Use the callback argument to pass a function that will increment a global integer. Note that the callback is typically invoked once per iteration, which is not necessarily the same as the number of function evaluations.
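Putting those suggestions together, here is a minimal sketch (this assumes scipy.optimize.minimize; rosen is SciPy's built-in Rosenbrock test function, used here purely as a stand-in cost function):

import numpy as np
from scipy.optimize import minimize, rosen

class CountingWrapper(object):
    """Wrap a cost function and count how many times it is evaluated."""
    def __init__(self, fn):
        self.n_calls = 0
        self.fn = fn
    def __call__(self, x):
        self.n_calls += 1
        return self.fn(x)

wrapped = CountingWrapper(rosen)
result = minimize(wrapped, np.array([1.3, 0.7, 0.8]), method='Nelder-Mead')

print(wrapped.n_calls)  # evaluations counted by the wrapper
print(result.nfev)      # evaluations reported by SciPy itself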

Passing some values as variables

I'm a physics graduate student with some basic knowledge of Python and I'm facing some problems that challenge my abilities.
I'm trying to pass some variables as dummies and some not. I have a function that receives a function as its first argument, but I need some of the values to be supplied later ("a posteriori").
What I mean is the following:
lead0 = add_leads(lead_shape_horizontal(W, n), (0, 0, n), sym0)
The function add_leads takes some function as well as a tuple and a third argument, which is fine. But n doesn't have any definition yet. I want n to take on an actual value once it enters add_leads.
Here is the actual function add_leads:
def add_leads(shape, origin_2D, symm):
    lead_return = []
    lead_return_reversed = []
    for m in range(L):
        n = N_MIN + m
        origin_3D = list(origin_2D) + [n]
        lead_return.append(kwant.Builder(symm))
        lead_return[m][red.shape(shape(n), tuple(origin_3D))] = ONN + HBAR*OMEGA*n
        lead_return[m][[kwant.builder.HoppingKind(*hopping) for
                        hopping in hoppings_leads]] = HOPP
        lead_return[m].eradicate_dangling()
Note that n is defined inside the for loop, so I wish to put the value of n into shape(n) (in this case lead_shape_horizontal, with a fixed value for W but not for n).
I need it this way because eventually the function passed as the lead_shape argument might have more than two input values, but I would still just need to vary n.
Can I achieve this in Python? If I can, how do I do so?
Help will be really appreciated.
Sorry for my english!
Thanks in advance
You probably should pass in the function lead_shape_horizontal, not the function call lead_shape_horizontal(W, n).
The latter will return the result of the function, not the function object itself. Unless the return value is also a function, you'll get an error when you later call shape(n), which would be identical to lead_shape_horizontal(W, n)(n).
As for providing a fixed value for W but not for n, you can either give W a default value in the function or just not make it an argument.
For example,
def lead_shape_horizontal(n, W=some_value):
    # do stuff
Or, if you always fix W, then it doesn't have to be an argument:
def lead_shape_horizontal(n):
    W = some_value
    # do stuff
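A third option, not mentioned above but in the standard library: functools.partial can fix W at the call site without editing the function (some_value, add_leads and sym0 are taken from the question's context):

from functools import partial

def lead_shape_horizontal(W, n):
    # do stuff
    pass

shape_fixed_W = partial(lead_shape_horizontal, some_value)  # binds W, leaves n free
lead0 = add_leads(shape_fixed_W, (0, 0), sym0)  # shape(n) now works inside add_leads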
Also note that you didn't define n when calling the function, so you can't pass n to the add_leads function. Maybe you have to construct the origin inside the function, like origin_2D = origin_2D + (n,). Then you can call the function like this: lead0 = add_leads(lead_shape_horizontal, (0, 0), sym0)
See the Python documentation to understand how default values work.
Some advice: watch the order of arguments when you're using default values.
Also watch out when you're passing a mutable object as a default value; this is a common gotcha.
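For example, a minimal illustration of that gotcha (append_item is a made-up name):

def append_item(item, bucket=[]):  # the SAME list object is reused across calls
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- surprise, not [2]

def append_item_fixed(item, bucket=None):  # the usual fix
    if bucket is None:
        bucket = []  # a fresh list on every call
    bucket.append(item)
    return bucket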

Calling the result of a function in another function

The code is very long so I won't type it in.
What I am confused about, as a beginner programmer, is function calling. I had a csv file, and the function divided all of its contents (they were integers) by 95 to get the normalised scores.
I finished the function by returning the result: return studentp_file.
Now I want to pass this new variable into another function. This new function will get the average of the studentp_file. So I made a new function; I'll add the other function as a template of what I'm doing.
def normalise(student_file, units_file):
    # ~ do stuff here ~
    return studentp_file

def mean(studentp_file):
    mean()
What I get confused about is what to put in the mean(). Do I keep it or remove it? I understand you guys don't know the file I'm working with, but a little basic understanding of how functions and function calling work would be appreciated. Thanks.
When you call your function, you need to pass in the parameters it needs (based on what you specified in your def statement). So your code might look like this:
def normalise(student_file, units_file):
    # ~ do stuff here ~
    return studentp_file

def mean(studentp_file):
    # ~ other stuff here: compute the mean ~
    return the_mean

# main code starts here
# Get the student file and units file from somewhere; I'll call them A and B.
# Get the resulting studentp file back from normalise and store it in variable C.
C = normalise(A, B)
# Now call the mean function using the file we got back from normalise
# and capture the result in the variable my_mean.
my_mean = mean(C)
print(my_mean)
I assume that the normalise function is executed prior to the mean function? If so, try out this structure:
def normalise(student_file, units_file):
    # do stuff here
    return studentp_file

def mean(studentp_file):
    # do stuff here
    pass

sp_file = normalise(student_file, units_file)
mean(sp_file)
Functions in Python (2/3) are made for reusability and to keep your code organized in blocks. These functions may or may not return a value, depending on the arguments you pass (if they accept arguments). Think of functions as real-life factories: raw goods are fed into a factory so that it produces a finished product. Functions are like that too. :)
Now, notice that I assigned a variable called sp_file the value of the function call normalise(...). This function call accepted the parameters (student_file, units_file), which are your "raw" goods fed to the function normalise.
return basically sends a value back to the point in your code that called the function. In this case, return sends the value of studentp_file back to the caller; sp_file then gets studentp_file's value and can be passed on to the mean() function.
/ogs
Well, it's unclear, but why not just (dummy example):
def f(a, b):
    return f2(3) + a + b

def f2(c):
    return c + 1
Call f2 inside f, and do the return in f2.
If the result from function one will always be passed to function two, you could do this:
def f_one(x, y):
    return f_two(x, y)

def f_two(x, y):
    return x + y

print(f_one(1, 1))
2
Or, just a thought: you could set up a variable z that works as a switch. If it's 1, the result is passed on to the next function; if it's 2, the result of function one is returned directly.
def f_one(x, y, z):
    result = x + y
    if z == 1:
        return f_two(result)
    elif z == 2:
        return result

def f_two(x):
    return x - 1

a = f_one(1, 1, 1)
print(a)
b = f_one(1, 1, 2)
print(b)

Using the methods of scipy's rv_continuous when creating a custom continuous distribution

I am trying to calculate E[f(x)] for some pdf that I generate/estimate from data.
It says in the documentation:
Subclassing
New random variables can be defined by subclassing rv_continuous class
and re-defining at least the _pdf or the _cdf method (normalized to
location 0 and scale 1) which will be given clean arguments (in
between a and b) and passing the argument check method.
If positive argument checking is not correct for your RV then you will
also need to re-define the _argcheck method.
So I subclassed rv_continuous and defined _pdf, but whenever I try to call:
print my_continuous_rv.expect(lambda x: x)
scipy yells at me:
AttributeError: 'your_continuous_rv' object has no attribute 'a'
Which makes sense, because I guess it's trying to figure out the lower bound of the integral; the error also prints:
lb = loc + self.a * scale
I tried defining the attributes self.a and self.b (which I believe are the limits/interval where the rv is defined) as:
self.a = float("-inf")
self.b = float("inf")
However, when I do that, it complains and says:
if N > self.numargs:
AttributeError: 'your_continuous_rv' object has no attribute 'numargs'
I was not really sure what numargs was supposed to be, but after checking scipy's code on GitHub it looks like there is this line of code:
if not hasattr(self, 'numargs'):
    # allows more general subclassing with *args
    self.numargs = len(shapes)
which I assume relates to the shape parameters my function is supposed to take.
Currently I am only doing a very simple random variable with a single float as its possible value, so I decided to hard-code numargs to be 1. But that just led down the road to more yelling on scipy's part.
Thus, what it boils down to is that the documentation is not clear to me about what I have to do when I subclass rv_continuous. I did what it said and overrode _pdf, but then it asked me for self.a, which I hardcoded, and then for numargs; at this point I conclude I don't really know how they want me to subclass rv_continuous. Does someone know? I can generate the pdf I want from the data I want to fit, and I just want to be able to get expected values and things like that from it. What else do I have to initialize in rv_continuous so that it actually works?
For historical reasons, scipy distributions are instances, so you need to have an instance of your subclass. For example:
>>> class MyRV(stats.rv_continuous):
...     def _pdf(self, x, k):
...         return k * np.exp(-k*x)
>>> my_rv = MyRV(name='exp', a=0.)  # instantiation
Notice the need to specify the limits of the support: default values are a=-inf and b=inf.
>>> my_rv.a, my_rv.b
(0.0, inf)
>>> my_rv.numargs # gets figured out automagically
1
Once you've specified, say, _pdf, you have a working distribution instance:
>>> my_rv.cdf(4, k=3)
0.99999385578764677
>>> my_rv.rvs(k=3, size=4)
array([ 0.37696127, 1.10192779, 0.02632473, 0.25516446])
>>> my_rv.expect(lambda x: 1, args=(2,)) # k=2 here
0.9999999999999999
SciPy's rv_histogram class allows you to provide binned data, and it supplies the pdf, cdf and random-generation methods for you.
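A minimal sketch of that route (rv_histogram exists in SciPy 0.19 and later; the normal sample is just stand-in data):

import numpy as np
from scipy import stats

data = np.random.normal(size=1000)  # stand-in for your observed sample
hist = np.histogram(data, bins=50)  # (counts, bin_edges) tuple
rv = stats.rv_histogram(hist)       # a distribution backed by the histogram

print(rv.pdf(0.0))             # density estimate at 0
print(rv.cdf(1.0))             # cumulative probability at 1
print(rv.expect(lambda x: x))  # E[f(x)] with f(x) = x, as in the question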

Adapt the method with a dynamic number of parameters

I'm using Sage to print different graphs with a script written in Python. I'm trying to write generic code that allows me to print all the graphs. For example I have:
g1 = graphs.BarbellGraph(9, 4)
g2 = graphs.RandomNewmanWattsStrogatz(12, 2, .3)
The graph depends on the number and type of my parameters, and I must adapt my code to make it work with the different cases.
My code:
registry = {"graphs": graphs, "digraphs":digraphs}
methodtocall = getattr(registry["graphs"], "BarbellGraph")
result = methodtocall(2,3)
print(result)
With this code I get as a result:
graphs.BarbellGraph(2, 3)
My problem is that methodtocall accepts two parameters in the code above, and I want that to change depending on the number of parameters of the chosen graph.
How can I change the code to make it dynamic in the parameters?
If I have N parameters I want to have:
result = methodtocall(param1, ... ,paramN)
Thanks in advance.
I think you are looking for the star-operator (aka "splat" or "unpacking" operator):
result = methodtocall(*[param1, ... ,paramN])
If you put the arguments in a list, you can call the function as follows:
graphs.RandomNewmanWattsStrogatz(*parameter_list)
This will expand the list as positional arguments.
If you are writing a function which needs to take an arbitrary number of positional arguments, you can accept them in a similar manner:
def my_function(*args):
    assert type(args) == tuple  # the positional arguments arrive as a tuple
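Tying this back to the question, here is a sketch combining the getattr lookup with unpacking (make_graph is a made-up helper; registry, graphs and digraphs come from the original post):

registry = {"graphs": graphs, "digraphs": digraphs}

def make_graph(family, name, params):
    # Look the constructor up by name, then splat an arbitrary parameter list
    method = getattr(registry[family], name)
    return method(*params)

g1 = make_graph("graphs", "BarbellGraph", [9, 4])
g2 = make_graph("graphs", "RandomNewmanWattsStrogatz", [12, 2, 0.3])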
