Python Declare Variables in Function Call? - python

Sorry if this has already been asked, I'm not sure exactly how to describe my question in a sentence anyway. I'm doing some work for my bioinformatics course, and I've begun to see variables declared when functions are called, instead of just passing the argument (see line 5 below).
def sobelFilter(pixmap):
    matrix = array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
    grey = pixmap.mean(axis=2)  # <--- here
    edgeX = convolveMatrix2D(grey, matrix)
    edgeY = convolveMatrix2D(grey, matrix.T)
    pixmap2 = sqrt(edgeX * edgeX + edgeY * edgeY)
    normalisePixmap(pixmap2)
    return pixmap2
What is the purpose of grey = pixmap.mean(axis=2) when axis is never used again? Why not just say grey = pixmap.mean(2)?
If it's necessary for my question, this is just code we were given to use, not written by me. pixmap refers to this code:
def imageToPixmapRGB(img):
    img2 = img.convert("RGB")
    w, h = img2.size
    data = img2.getdata()
    pixmap = array(data, float)
    pixmap = pixmap.reshape((h, w, 3))
    return pixmap

There is no local variable called axis being initialized; you are just explicitly stating which parameter of the function you are passing.
You might use this if there are multiple optional parameters in a function:
Example:
In [9]: def foo(x=2, y=2):
   ...:     print(x, y)
   ...:

In [10]: foo(y=3)
2 3
If I want to change y, but leave x as default, I must specify when I call the function.
However, even when you don't need to skip over an optional parameter, there is nothing wrong with passing an argument by its keyword.

pixmap is a numpy array with dimension >= 2. You can test this by printing pixmap.ndim.
The line you refer to is calculating the mean of an array along an axis. In other words, axis is an argument of the mean method. See numpy.mean documentation for more details.
You are right in that the axis parameter may be omitted, i.e. arr.mean(2) is equivalent to arr.mean(axis=2). Most likely it was explicitly stated for clarity.
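For instance, a minimal sketch of that equivalence (the toy array below is just a stand-in for a real image loaded with imageToPixmapRGB):
import numpy as np

# Toy stand-in for an RGB pixmap: 2 rows, 2 columns, 3 colour channels.
pixmap = np.arange(12, dtype=float).reshape((2, 2, 3))

grey_keyword = pixmap.mean(axis=2)    # average over the colour-channel axis
grey_positional = pixmap.mean(2)      # identical result, just less readable

print(np.array_equal(grey_keyword, grey_positional))  # True
print(grey_keyword.shape)                              # (2, 2)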

This looks a bit like the variable assignment syntax because of the =, but it is actually a very different feature:
When you call functions, you can specify which variables to set to what using the "keyword argument" syntax. This allows you to set specific parameters by name:
some_func(arg1, arg2, arg5=None)
This is very useful with default arguments:
def x(a, b, c=None, d=None):
    pass
If you wanted to set d with the normal syntax, you would have to specify an argument for c too.
x(1, 2, None, "hi")
Using the keyword argument syntax, you can omit this:
x(1, 2, d="hi")
Keyword arguments are often, like in this case, also used to make the purpose of a parameter clear to the reader. If you see:
pixmap.mean(2)
It is not immediately clear what the 2 means. However, if you see
pixmap.mean(axis=2)
you immediately know what the 2 refers to and why it's there.
Also see this section of the Python docs.

Related

Passing some values as variables

I'm a physics graduate student with some basic knowledge of Python and I'm facing some problems that challenge my abilities.
I'm trying to pass some variables as dummies and some not. I have a function that receives a function as its first argument, but I need some of the values to be declared "a posteriori".
What I mean is the following:
lead0 = add_leads(lead_shape_horizontal(W, n), (0, 0, n), sym0)
The function "add_leads" takes some function as well as a tuple and a third argument which is fine. But n hasn't any definition yet. I want that n has an actual sense when it enters "add_leads".
Here is the actual function add_leads
def add_leads(shape, origin_2D, symm):
    lead_return = []
    lead_return_reversed = []
    for m in range(L):
        n = N_MIN + m
        origin_3D = list(origin_2D) + [n]
        lead_return.append(kwant.Builder(symm))
        lead_return[m][red.shape(shape(n), tuple(origin_3D))] = ONN + HBAR*OMEGA*n
        lead_return[m][[kwant.builder.HoppingKind(*hopping) for
                        hopping in hoppings_leads]] = HOPP
        lead_return[m].eradicate_dangling()
Note that n is defined inside the for loop, so I wish to put the value of n into shape(n) (in this case lead_shape_horizontal, with a fixed value for W but not for n).
I need it to be this way because eventually the function passed as the shape argument might have more than 2 input values, but I still just need to vary n.
Can I achieve this in Python? If I can, How to do so?
Help will be really appreciated.
Sorry for my English!
Thanks in advance
You should probably pass in the function lead_shape_horizontal itself, not the call lead_shape_horizontal(W, n).
The latter returns the result of the function, not the function object itself. Unless that return value is also a function, you'll get an error when you later call shape(n), which would be identical to lead_shape_horizontal(W, n)(n).
As for providing a fixed value for W but not for n, you can either give W a default value in the function or just not make it an argument at all.
For example,
def lead_shape_horizontal(n, W=some_value):
    # do stuff
or, if you always fix W, then it doesn't have to be an argument at all:
def lead_shape_horizontal(n):
    W = some_value
    # do stuff
Also note that you haven't defined n at the point where you call the function, so you can't pass n to the add_leads function.
Maybe you have to construct the origin_2D inside the function,
like origin_2D = origin_2D + (n,)
Then you can call the function like this: lead0 = add_leads(lead_shape_horizontal, (0, 0), sym0)
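A minimal, self-contained sketch of that pattern (everything here is a placeholder for the real kwant code; it only shows the function object being passed in and called later):
W_FIXED = 10  # hypothetical stand-in for the fixed W

def lead_shape_horizontal(n, W=W_FIXED):
    # placeholder body: build whatever geometry the lead needs from n and W
    return (W, n)

def add_leads(shape, origin_2D, symm=None):
    leads = []
    for m in range(3):             # stand-in for range(L)
        n = m + 1                  # stand-in for N_MIN + m
        leads.append(shape(n))     # shape is the function object, called only here
    return leads

print(add_leads(lead_shape_horizontal, (0, 0)))
# [(10, 1), (10, 2), (10, 3)]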
See the Python documentation to understand how default values work.
Some advice: watch out for the order of arguments when you're using default values.
Also watch out when you're passing a mutable object as a default value. This is a common gotcha.
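A tiny sketch of that gotcha (function names are hypothetical):
def append_item(item, bucket=[]):     # the [] is created once, at definition time
    bucket.append(item)
    return bucket

print(append_item(1))    # [1]
print(append_item(2))    # [1, 2]  <- the same list is reused across calls

def append_item_safe(item, bucket=None):
    if bucket is None:   # the usual fix: use None as a sentinel
        bucket = []
    bucket.append(item)
    return bucket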

How to change a default parameter programmatically? [duplicate]

In Python, is it possible to redefine the default parameters of a function at runtime?
I defined a function with 3 parameters here:
def multiplyNumbers(x, y, z):
    return x*y*z

print(multiplyNumbers(x=2, y=3, z=3))
Next, I tried (unsuccessfully) to set the default parameter value for y, and then I tried calling the function without the parameter y:
multiplyNumbers.y = 2;
print(multiplyNumbers(x=3, z=3))
But the following error was produced, since the default value of y was not set correctly:
TypeError: multiplyNumbers() missing 1 required positional argument: 'y'
Is it possible to redefine the default parameters of a function at runtime, as I'm attempting to do here?
Just use functools.partial:
multiplyNumbers = functools.partial(multiplyNumbers, y=42)
One problem here: you will not be able to call it as multiplyNumbers(5, 7, 9); you would have to say y=7 explicitly.
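A quick sketch of how that plays out (the values are arbitrary):
import functools

def multiplyNumbers(x, y, z):
    return x * y * z

multiplyNumbers = functools.partial(multiplyNumbers, y=42)

print(multiplyNumbers(x=2, z=3))   # 252; y falls back to the bound 42
print(multiplyNumbers(2, z=3))     # also fine: x positionally, z by keyword
# multiplyNumbers(2, 7, 3)         # TypeError: got multiple values for argument 'y'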
If you need to remove the default arguments later, I see two ways:
Store the original function somewhere:
oldF = f
f = functools.partial(f, y=42)
# work with the changed f
f = oldF  # restore
or use partial.func:
f = f.func  # go back to the previous version
Technically, it is possible to do what you ask… but it's not a good idea. RiaD's answer is the Pythonic way to do this.
In Python 3:
>>> def f(x=1, y=2, z=3):
...     print(x, y, z)
>>> f()
1 2 3
>>> f.__defaults__ = (4, 5, 6)
>>> f()
4 5 6
As with everything else that's under the covers and hard to find in the docs, the inspect module chart is the best place to look for function attributes.
The details are slightly different in Python 2, but the idea is the same. (Just change the pulldown at the top left of the docs page from 3.3 to 2.7.)
If you're wondering how Python knows which defaults go with which arguments when all it has is a tuple… it just counts backward from the end (or from the first of *, *args, or **kwargs; anything after that goes into the __kwdefaults__ dict instead). f.__defaults__ = (4, 5) will set the defaults for y and z to 4 and 5, and leave x with no default. That works because you can't have non-defaulted parameters after defaulted parameters.
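A small sketch of that back-counting rule (note this f deliberately gives x no default):
def f(x, y=2, z=3):
    print(x, y, z)

f.__defaults__ = (4, 5)   # two values, so they attach to the last two parameters: y and z
f(1)                      # prints: 1 4 5
# x still has no default, so calling f() with no arguments raises a TypeError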
There are some cases where this won't work, but even then, you can immutably copy it to a new function with different defaults:
>>> import types
>>> f2 = types.FunctionType(f.__code__, f.__globals__, f.__name__,
...                         (4, 5, 6), f.__closure__)
Here, the types module documentation doesn't really explain anything, but help(types.FunctionType) in the interactive interpreter shows the params you need.
The only case you can't handle is a builtin function. But they generally don't have actual defaults anyway; instead, they fake something similar in the C API.
Yes, you can accomplish this by modifying the function's func.__defaults__ tuple.
That attribute is a tuple of the default values for each defaulted argument of the function.
For example, to make pandas.read_csv always use sep='\t', you could do:
import inspect
import pandas as pd
default_args = inspect.getfullargspec(pd.read_csv).args
default_arg_values = list(pd.read_csv.__defaults__)
default_arg_values[default_args.index("sep")] = '\t'
pd.read_csv.__defaults__ = tuple(default_arg_values)
Use func_defaults, as in:
def myfun(a=3):
    return a

myfun.func_defaults = (4,)
b = myfun()
assert b == 4
Check the docs for func_defaults (note that func_defaults is the Python 2 name; in Python 3 the attribute is __defaults__).
UPDATE: looking at RiaD's response, I think I was too literal with mine. I don't know the context you're asking this question from, but in general (and following the Zen of Python) I believe working with partial applications is a better option than redefining a function's default arguments.

list of functions with parameters

I need to obtain a list of functions, where my function is defined as follows:
import theano.tensor as tt
def tilted_loss(y, f, q):
    e = (y - f)
    return q*tt.sum(e) - tt.sum(e[e < 0])
I attempted to do
qs = np.arange(0.05,1,0.05)
q_loss_f = [tilted_loss(q=q) for q in qs]
however, get the error TypeError: tilted_loss() missing 2 required positional arguments: 'y' and 'f'. I attempted the simpler a = tilted_loss(q=0.05) with the same result.
How do you go about creating this list of functions when parameters are required? Similar questions on SO consider the case where parameters are not involved.
You can use functools.partial:
import functools

q_loss_f = [functools.partial(tilted_loss, q=q) for q in qs]
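Each element of q_loss_f is then a callable that still expects y and f; a quick sketch of using one (y_true and f_pred are hypothetical stand-ins for your tensors):
loss_at_q05 = q_loss_f[0]                   # the partial with q=0.05 bound
loss_value = loss_at_q05(y_true, f_pred)    # same as tilted_loss(y_true, f_pred, q=0.05)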
There are 2 ways you can solve this problem. Both ways require you know the default values for y and f.
With the current function, there's simply no way for the Python interpreter to know the value of y and f when you call tilted_loss(q=0.05). y and f are simply undefined & unknown.
Solution (1): Add default values
We can fix this by adding default values to the function; for example, if the default values are y = 0 and f = 1:
def tilted_loss(q, y=0, f=1):
    # original code goes here
Note that arguments with default values have to come AFTER non-default arguments (i.e. q).
Solution (2): Specify default values during function call
Alternatively, just specify the default values every time you call that function. (Solution 1 is better)
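In other words, with the original signature you simply pass everything at call time (y_true and f_pred are hypothetical stand-ins for your data and predictions):
loss_value = tilted_loss(y_true, f_pred, q=0.05)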

How to correctly call function with optional parameters in python

I'm a beginner with Python and I'm facing a problem with a function that requires optional parameters.
This function takes a variable number of file paths as parameters, anywhere from 2 to n of them.
After that, a certain number of optional parameters can be passed to this function.
I tried to do something like that:
def compareNfilesParameters(*args):
    start_time = time.time()
    listFiles = []
    listParameters = []
    for argument in args:
        if str(argument).endswith(".vcf"):
            listFiles.append(str(argument))
        else:
            listParameters.append(argument)
So if a parameter has the file extension it is considered one of the file path parameters; the others are treated as the optional parameters.
What I want to do is letting the user call the function like:
function('a.vcf', 'b.vcf', 'c.vcf')
or
function('a.vcf', 'b.vcf', 'c.vcf', 0, 1)
or
function('a.vcf', 'b.vcf', 'c.vcf', 0, 1, 4,...,3)
I tried different approaches but none of them satisfies me.
The first approach is declaring the function as:
def compareNfilesParameters(*args)
but this way, if I get for example 3 parameters, 2 will certainly be the file paths, but for the last one I don't know which variable it refers to. So I need to specify every value and pass '-1' for every parameter whose default value I want to keep.
The 2nd approach is the following:
def compareNfilesParameters(*args, par1=10, par2=15, ...)
But this way I need to call the function like:
compareNfilesParameters(path1, path2, path3, par1 = 10)
and not like
compareNfilesParameters(path1, path2, path3, 10)
or the 10 will be considered part of the args input, right? I wouldn't like to use this approach because calling the function becomes very verbose.
How would you do this?
Make the user pass in the filenames as a sequence; don't try to cram everything into separate arguments:
def compareNfilesParameters(files, *params):
and call this as:
compareNfilesParameters(('a.vcf', 'b.vcf', 'c.vcf'), 0, 1, 4)
This makes the files explicit and removes the need to separate files from other parameters.
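A minimal sketch of that signature in action (the body is purely illustrative):
def compareNfilesParameters(files, *params):
    print(list(files))   # ['a.vcf', 'b.vcf', 'c.vcf']
    print(params)        # (0, 1, 4)

compareNfilesParameters(('a.vcf', 'b.vcf', 'c.vcf'), 0, 1, 4)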
If your remaining parameters are distinct options (and not a homogeneous series of integers), I'd use keyword arguments:
def compareNfilesParameters(files, op1=default_value, op2=default_value, op3=default_value):
You don't have to pass keyword arguments by keyword when calling; you can still pass them positionally:
compareNfilesParameters(('a.vcf', 'b.vcf', 'c.vcf'), 0, 1, 4)
would give op1 the value 0, op2 the value 1, and op3 the value 4. Only if you want to specify values out of order or for a specific option do you have to use keyword arguments in the call:
compareNfilesParameters(('a.vcf', 'b.vcf', 'c.vcf'), op3=4)
Ok, I solved it by using keyword parameters as suggested.
def compareNfilesParameters(listFiles, **kwargs):
    start_time = time.time()
    if len(listFiles) < MINUMUM_FILES_NUMBER:
        print "You need to specify at least " + str(MINUMUM_FILES_NUMBER) + " files."
        return
    try:
        operationType = int(kwargs.get("op", DEFAULT_OPERATION_TYPE))
    except ValueError:
        print "Operation type filter has to be an integer."
        return
    if operationType not in [0, 1]:
        print "Operation type must be 0 (intersection), 1 (union)"
        return
and so on for all the parameters.
This way I need to put all the file paths in a list and pass it as a single required parameter, and search the kwargs dictionary for the optional parameters, setting the default values when they are not given.
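A call then looks something like this (file names and values are just examples):
compareNfilesParameters(['a.vcf', 'b.vcf', 'c.vcf'], op=1)
compareNfilesParameters(['a.vcf', 'b.vcf'])   # all optional parameters keep their defaults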

Using the methods of scipy's rv_continuous when creating a custom continuous distribution

I am trying to calculate E[f(x)] for some pdf that I generate/estimate from data.
It says in the documentation:
Subclassing
New random variables can be defined by subclassing rv_continuous class
and re-defining at least the _pdf or the _cdf method (normalized to
location 0 and scale 1) which will be given clean arguments (in
between a and b) and passing the argument check method.
If positive argument checking is not correct for your RV then you will
also need to re-define the _argcheck method.
So I subclassed and defined _pdf, but whenever I try to call:
print my_continuous_rv.expect(lambda x: x)
scipy yells at me:
AttributeError: 'your_continuous_rv' object has no attribute 'a'
Which makes sense, because I guess it's trying to figure out the lower bound of the integral, since it also prints this in the error:
lb = loc + self.a * scale
I tried defining the attributes self.a and self.b (which I believe are the limits/interval where the rv is defined) as:
self.a = float("-inf")
self.b = float("inf")
However, when I do that, it complains and says:
if N > self.numargs:
AttributeError: 'your_continuous_rv' object has no attribute 'numargs'
I was not really sure what numargs was supposed to be, but after checking scipy's code on GitHub it looks like there is this line of code:
if not hasattr(self, 'numargs'):
    # allows more general subclassing with *args
    self.numargs = len(shapes)
Which I assume is the shape of the random variable my function was supposed to take.
Currently I am only doing a very simple random variable with a single float as a possible value for it, so I decided to hard-code numargs to be 1. But that just led down the road to more yelling on scipy's part.
Thus, what it boils down to is that the documentation isn't clear to me about what I have to do when I subclass. I did what it said and overrode _pdf, but then it asks me for self.a, which I hard-coded, and then it asks me for numargs, and at this point I'm concluding I don't really know how they want me to subclass rv_continuous. Does someone know? I can generate the pdf I want from the data I want to fit, and I just want to be able to get expected values and things like that from the pdf. What else do I have to initialize in rv_continuous so that it actually works?
For historical reasons, scipy distributions are instances, so that you need to have an instance of your subclass. For example:
>>> class MyRV(stats.rv_continuous):
...     def _pdf(self, x, k):
...         return k * np.exp(-k*x)
>>> my_rv = MyRV(name='exp', a=0.)  # instantiation
Notice the need to specify the limits of the support: default values are a=-inf and b=inf.
>>> my_rv.a, my_rv.b
(0.0, inf)
>>> my_rv.numargs # gets figured out automagically
1
Once you've specified, say, _pdf, you have a working distribution instance:
>>> my_rv.cdf(4, k=3)
0.99999385578764677
>>> my_rv.rvs(k=3, size=4)
array([ 0.37696127, 1.10192779, 0.02632473, 0.25516446])
>>> my_rv.expect(lambda x: 1, args=(2,)) # k=2 here
0.9999999999999999
SciPy's rv_histogram class allows you to provide data, and it gives you the pdf, cdf and random generation methods.
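For example, a short sketch (the sample data below is just a placeholder):
import numpy as np
from scipy import stats

data = np.random.normal(loc=2.0, scale=0.5, size=10000)   # placeholder sample

hist = np.histogram(data, bins=100)      # (counts, bin_edges)
dist = stats.rv_histogram(hist)          # continuous distribution estimated from the data

print(dist.pdf(2.0))                     # density estimate at x = 2
print(dist.cdf(2.0))                     # roughly 0.5 for this sample
print(dist.expect(lambda x: x))          # E[x], close to 2.0
print(dist.rvs(size=5))                  # random draws from the estimated pdf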
