Defining default keyword values for a function from a dictionary in Python - python

I am writing a function that takes a lot of keywords.
I have a very lengthy dictionary, already defined and used elsewhere in my code, that contains many of these keywords. E.g.
{'setting1': None, 'setting2': None, ...}
I am wondering whether there is a way, when I define my function, to set all of these as keywords, rather than having to type them out again like this:
def my_function(setting1=None, setting2=None, **kwargs):
To be clear, essentially I want to set all of the contents of the dictionary to be keywords with default value None, and when I call the function I should be able to change their values. So I am not looking to provide the dictionary as kwargs upon calling the function.

While not exactly the same, I usually prefer to collect the arguments in **kwargs and use .get() to fetch each value or None:
def my_function(**kwargs):
    do_something(kwargs.get("alpha"), kwargs.get("beta"))
.get() on a dictionary returns the value if the key exists, or None if it does not. You can optionally specify a different default value as a second argument if you like.
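If the existing lengthy dictionary should drive the defaults, one option is to merge it with the incoming keyword arguments; a minimal sketch, where DEFAULTS stands in for the real dictionary and do_something for the real function body:

DEFAULTS = {'setting1': None, 'setting2': None}  # the pre-existing dictionary of defaults

def my_function(**kwargs):
    params = {**DEFAULTS, **kwargs}  # caller-supplied values override the None defaults
    do_something(params['setting1'], params['setting2'])

This keeps the long list of names in one place, though the individual keywords no longer appear in the function's signature or in help().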

When creating a function, you will need to implement how your arguments are used. By automatically creating arguments you end up adding arguments and forgetting to implement a behaviour for them.
# Manually defined.
def func(a, b, c, d):
    return a + b / c * d

# Auto-defined - human error.
def func(""" auto define a, b, c, d, e, f, g, h """):
    return a + b / c * d  # <- you only use half of the arguments. Confusing at best.

# Auto-defined - inputs unclear, code is not explicit.
def func(define_my_args):
    return a + b / c * d
If you need to reuse "code behaviour" to the point that you can "inherit" parameters, maybe you should be using an object instead.
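If that is the direction you end up taking, a rough sketch of carrying shared parameters on an object might look like this (the class and attribute names are made up for illustration):

class Settings:
    def __init__(self, **overrides):
        # shared defaults live in one place
        self.setting1 = None
        self.setting2 = None
        for name, value in overrides.items():
            setattr(self, name, value)

def my_function(settings):
    # the function states explicitly which settings it actually uses
    return settings.setting1, settings.setting2

result = my_function(Settings(setting1=42))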

Related

In Python, is it possible to pass a parameter value which will behave like "missing" or "default"?

Let's say I have a 3rd-party function (where I don't control the code):

def f(a=1):
    ...

Assuming I don't know the default value (or I don't want to rely on it not changing in future versions), is there a way to call the function with some value of a which will be equivalent to calling f()?
The use case is that the values should be passed from an external function, and I want to avoid boilerplate like:

if a is not None:
    return f(a)
else:
    return f()
EDIT
clarification:
I want to do something like:
a = DEFAULT
for which f(a) will be equivalent to calling f()
You don't need to do anything special: if you supply a value, the default will be overridden.
Example code:

def f(a=10):
    print(a)

f()    # it will print 10, the default value
f(20)  # will print 20, the supplied value

Or, if your case is to set a default value in your own code, you can use a partial function:

from functools import partial

new_f = partial(f, 20)
new_f()  # it will use the default value you set above and print 20
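If the goal from the question is to forward a value only when it was actually given, so that f() keeps whatever default it has, a common pattern is a sentinel object plus conditional keyword arguments; a minimal sketch using the question's f (the names DEFAULT and call_f are made up here):

DEFAULT = object()  # sentinel meaning "use f's own default"

def call_f(a=DEFAULT):
    kwargs = {} if a is DEFAULT else {'a': a}
    return f(**kwargs)  # calls f() when a was not given, f(a=...) otherwise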

How can I make object attributes mutable in Python? [duplicate]

How can I pass an integer by reference in Python?
I want to modify the value of a variable that I am passing to the function. I have read that everything in Python is pass by value, but there has to be an easy trick. For example, in Java you could pass the reference types of Integer, Long, etc.
How can I pass an integer into a function by reference?
What are the best practices?
It doesn't quite work that way in Python. Python passes references to objects. Inside your function you have an object -- You're free to mutate that object (if possible). However, integers are immutable. One workaround is to pass the integer in a container which can be mutated:
def change(x):
    x[0] = 3

x = [1]
change(x)
print(x)
This is ugly/clumsy at best, but you're not going to do any better in Python. The reason is that in Python, assignment (=) takes whatever object is the result of the right hand side and binds it to whatever is on the left hand side *(or passes it to the appropriate function).
Understanding this, we can see why there is no way to change the value of an immutable object inside a function -- you can't change any of its attributes because it's immutable, and you can't just assign the "variable" a new value because then you're actually creating a new object (which is distinct from the old one) and giving it the name that the old object had in the local namespace.
Usually the workaround is to simply return the object that you want:
def multiply_by_2(x):
    return 2 * x

x = 1
x = multiply_by_2(x)
*In the first example case above, 3 actually gets passed to x.__setitem__.
Most cases where you would need to pass by reference are where you need to return more than one value back to the caller. A "best practice" is to use multiple return values, which is much easier to do in Python than in languages like Java.
Here's a simple example:
import math

def RectToPolar(x, y):
    r = (x ** 2 + y ** 2) ** 0.5
    theta = math.atan2(y, x)
    return r, theta  # return 2 things at once

r, theta = RectToPolar(3, 4)  # assign 2 things at once
Not exactly passing a value directly, but using it as if it was passed.
def outer():
    x = 7

    def my_method():
        nonlocal x  # rebinds x in the enclosing function's scope
        x += 1

    my_method()
    print(x)  # 8

outer()
Caveats:
nonlocal was introduced in Python 3.
If the enclosing scope is the global one, use global instead of nonlocal.
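For completeness, at module scope (where x is a global) the same idea would look like this; a minimal sketch:

x = 7

def my_method():
    global x  # rebind the module-level name
    x += 1

my_method()
print(x)  # 8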
Maybe it's not the Pythonic way, but you can do this with ctypes:

import ctypes

def incr(a):
    a.value += 1  # mutate the ctypes integer in place

x = ctypes.c_int(1)  # create a mutable C-style int
incr(x)
print(x.value)  # 2
Really, the best practice is to step back and ask whether you really need to do this. Why do you want to modify the value of a variable that you're passing in to the function?
If you need to do it for a quick hack, the quickest way is to pass a list holding the integer, and stick a [0] around every use of it, as mgilson's answer demonstrates.
If you need to do it for something more significant, write a class that has an int as an attribute, so you can just set it. Of course this forces you to come up with a good name for the class, and for the attribute—if you can't think of anything, go back and read the sentence again a few times, and then use the list.
More generally, if you're trying to port some Java idiom directly to Python, you're doing it wrong. Even when there is something directly corresponding (as with static/@staticmethod), you still don't want to use it in most Python programs just because you'd use it in Java.
Maybe slightly more self-documenting than the list-of-length-1 trick is the old empty type trick:
def inc_i(v):
    v.i += 1

x = type('', (), {})()
x.i = 7
inc_i(x)
print(x.i)
A numpy single-element array is mutable and yet for most purposes, it can be evaluated as if it was a numerical python variable. Therefore, it's a more convenient by-reference number container than a single-element list.
import numpy as np

def triple_var_by_ref(x):
    x[0] = x[0] * 3

a = np.array([2])
triple_var_by_ref(a)
print(a + 1)

Output:
[7]
The correct answer is to use a class and put the value inside the class; this lets you pass by reference exactly as you desire.
class Thing:
    def __init__(self, a):
        self.a = a

def dosomething(ref):
    ref.a += 1

t = Thing(3)
dosomething(t)
print("T is now", t.a)
In Python, every value is a reference (a pointer to an object), just like non-primitives in Java. Also, like Java, Python only has pass by value. So, semantically, they are pretty much the same.
Since you mention Java in your question, I would like to see how you achieve what you want in Java. If you can show it in Java, I can show you how to do it exactly equivalently in Python.
class PassByReference:
    def Change(self, var):
        self.a = var
        print(self.a)

s = PassByReference()
s.Change(5)
class Obj:
    def __init__(self, a):
        self.value = a

    def sum(self, a):
        self.value += a

a = Obj(1)
b = a
a.sum(1)
print(a.value, b.value)  # 2 2
In Python, everything is passed by value, but if you want to modify some state, you can change the value of an integer inside a list or object that's passed to a method.
Integers are immutable in Python; once they are created, we cannot change their value. Using the assignment operator on a variable only makes it point to a different object, it does not modify the object it pointed to before.
In Python a function can return multiple values, and we can make use of that:

def swap(a, b):
    return b, a

a, b = 22, 55
a, b = swap(a, b)
print(a, b)
To change the reference a variable is pointing to, we can wrap immutable data types (int, long, float, complex, str, bytes, tuple, frozenset) inside mutable data types (bytearray, list, set, dict).
# var is an instance of dictionary type
def change(var, key, new_value):
    var[key] = new_value

var = dict()
var['a'] = 33
change(var, 'a', 2625)
print(var['a'])

Passing some values as variables

I'm a physics graduate student with some basic knowledge of Python and I'm facing some problems that challenge my abilities.
I'm trying to pass some variables as dummies and some not. I have a function that receives a function as its first argument, but I need some of its values to be declared "a posteriori".
What I mean is the following:
lead0 = add_leads(lead_shape_horizontal(W, n), (0, 0, n), sym0)
The function "add_leads" takes some function as well as a tuple and a third argument which is fine. But n hasn't any definition yet. I want that n has an actual sense when it enters "add_leads".
Here is the actual function add_leads
def add_leads(shape, origin_2D, symm):
    lead_return = []
    lead_return_reversed = []
    for m in range(L):
        n = N_MIN + m
        origin_3D = list(origin_2D) + [n]
        lead_return.append(kwant.Builder(symm))
        lead_return[m][red.shape(shape(n), tuple(origin_3D))] = ONN + HBAR*OMEGA*n
        lead_return[m][[kwant.builder.HoppingKind(*hopping) for
                        hopping in hoppings_leads]] = HOPP
        lead_return[m].eradicate_dangling()
Note that n is defined inside the for loop, so I wish to put the value of n into shape(n) (in this case lead_shape_horizontal with a fixed value for W, but not for n).
I need it to work this way because eventually the function passed as the shape argument might have more than 2 input values, but I would still only need to vary n.
Can I achieve this in Python? If so, how?
Help will be really appreciated.
Sorry for my english!
Thanks in advance
You should probably pass in the function lead_shape_horizontal, not the call lead_shape_horizontal(W, n).
The latter passes in the result of the function, not the function object itself. Unless that return value is also a function, you'll get an error when you later call shape(n), which would be identical to lead_shape_horizontal(W, n)(n).
As for providing a fixed value for W but not for n, you can either give W a default value in the function or just not make it an argument.
For example,
def lead_shape_horizontal(n, W=some_value):
    ...  # do stuff

Or, if you always fix W, then it doesn't have to be an argument at all:

def lead_shape_horizontal(n):
    W = some_value
    ...  # do stuff
Also note that you didn't define n when calling the function, so you can't pass n in to the add_leads call.
Maybe you have to construct the full origin inside the function,
like origin_2D = origin_2D + (n,).
Then you can call the function like this: lead0 = add_leads(lead_shape_horizontal, (0, 0), sym0)
See the Python documentation to understand how default values work.
Some advice: watch out for the order of arguments when you're using default values.
Also watch out when you're passing a mutable object as a default value. This is a common gotcha.
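Another option for fixing W while leaving n free, if you don't want to touch the shape function at all, is functools.partial; a rough sketch assuming lead_shape_horizontal takes (W, n) as in the question:

from functools import partial

shape_fixed_W = partial(lead_shape_horizontal, W)  # W is bound now, n is still free
lead0 = add_leads(shape_fixed_W, (0, 0), sym0)     # add_leads can later call shape(n)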

Using string as literal expression in function argument in Python

Let's say I have a function that can take various kinds of parameter values, but I don't want to (as a constraint) pass arguments explicitly. Instead, I want to pass them as a string:
def func(param):
    return param + param

a = 'param=4'
func(<do something to a>(a))
>> 8
Is this possible in python?
I want to use this idea in Django to create Query filters based on GET parameters in a dictionary and then just chain them using their keys.
lookup_dic = {'user': 'author=user',
              'draft': 'Q(publish_date_lte=timezone.now())|Q(publish_date_isnull=True)'}
Based on whether the user and draft keywords are passed in the GET parameters, this would be read out like:
queryset.objects.filter(author=user).filter(Q(publish_date_lte=timezone.now()) |
                                            Q(publish_date_isnull=True))
I understand that I can do this by replacing author=user with Q(author__name=user), but I wanted to know whether this kind of string-to-expression feature is implemented in Python in general.
Use eval
def func(param=0):
    return param + param

a = 'param=4'
eval('func(' + a + ')')
Are you looking for this?
def func(param):
    return param + param

a = 'param=4'
parameter, value = a.split("=")
print(func(**{parameter: int(value)}))
# >> 8
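For the Django use case in the question, a safer alternative to evaluating strings is to map each GET key to a Q object and chain the filters conditionally; a rough sketch, with the field names (author, publish_date) assumed from the question:

from django.db.models import Q
from django.utils import timezone

def filter_from_params(queryset, params, user):
    # params is something like request.GET
    lookups = {
        'user': lambda: Q(author=user),
        'draft': lambda: Q(publish_date__lte=timezone.now()) | Q(publish_date__isnull=True),
    }
    for key, make_q in lookups.items():
        if key in params:
            queryset = queryset.filter(make_q())
    return queryset

The lambdas delay evaluation, so timezone.now() is computed when the filter is actually applied rather than when the mapping is defined.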

Data structure of memoization in db

What is the best data structure to cache (save/store/memoize) so many function results in a database?
Suppose a function calc_regress with the following definition in Python:
def calc_regress(ind_key, dep_key, count=30):
    independent_list = sql_select_recent_values(count, ind_key)
    dependant_list = sql_select_recent_values(count, dep_key)
    import scipy.stats as st
    return st.linregress(independent_list, dependant_list)
I see answers to What kind of table structure should be used to store memoized function parameters and results in a relational database? but it seems to solve the problem for just one function, while I have about 500 functions.
Option A
You could use the structure in the linked answer, un-normalized, with the number of columns equal to the maximum number of arguments among the 500 functions. You also need to add a column for the function name.
Then you could do a SELECT * FROM expensive_func_results WHERE func_name = 'calc_regress' AND arg1 = ind_key AND arg2 = dep_key AND arg3 = count, etc.
Of course, that's not a very good design. For the same function called with fewer parameters, columns with null values/non-matches need to be ignored; otherwise you'll get multiple result rows.
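A minimal sketch of Option A with sqlite3 (the table and column names follow the option above and are only illustrative; it assumes every call provides exactly three arguments):

import sqlite3

conn = sqlite3.connect('memo.db')
conn.execute("""CREATE TABLE IF NOT EXISTS expensive_func_results
                (func_name TEXT, arg1 TEXT, arg2 TEXT, arg3 TEXT, result TEXT)""")

def lookup(func_name, arg1, arg2, arg3):
    # returns the cached result string, or None if this call was never stored
    row = conn.execute(
        "SELECT result FROM expensive_func_results "
        "WHERE func_name = ? AND arg1 = ? AND arg2 = ? AND arg3 = ?",
        (func_name, str(arg1), str(arg2), str(arg3))).fetchone()
    return row[0] if row else None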
Option B
Create the table/structure as func_name, arguments, result, where 'arguments' is always a kwargs dictionary or positional args, but not mixed per entry. Even with the kwargs dict stored as a string, the order of keys->values in it is not predictable/consistent even for the same args, so you'll need to order it before converting to a string and storing it. When you want to query, you'll use SELECT * FROM expensive_func_results WHERE func_name = 'calc_regress' AND arguments = 'str(kwargs_dict)', where str(kwargs_dict) is something you'll set programmatically. It could also be set to the result of inspect.getargspec (or inspect.getcallargs), though you'll have to check for consistency.
You won't be able to do queries on the argument combos unless you provide all the arguments to the query or do a partial match with LIKE.
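For Option B, a small sketch of producing a consistent arguments string regardless of keyword order (using json with sorted keys; any stable serialization would do):

import json

def canonical_args(*args, **kwargs):
    # sort_keys makes the same keyword arguments always serialize identically
    return json.dumps({'args': args, 'kwargs': kwargs}, sort_keys=True, default=str)

key = canonical_args(30, ind_key='independent', dep_key='dependent')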
Option C
Normalised all the way: One table func_calls as func_name, args_combo_id, arg_name_idx, arg_value. Each row of the table will store one arg for one combo of that function's calling args. Another table func_results as func_name, args_combo_id, result. You could also normalise further for func_name to be mapped to a func_id.
In this one, the order of keyword args doesn't matter, since you'll be doing an inner join to select each parameter. This query will have to be built programmatically or done via a stored procedure, since the number of joins required to fetch all the parameters is determined by the number of parameters. Your function above has 3 params but you may have another with 10. arg_name_idx is 'argument name or index', so it also works for mixed kwargs + args. Some duplication may occur in cases like calc_regress(ind_key=1, dep_key=2, count=30) and calc_regress(1, 2, 30) (as well as calc_regress(1, 2) with a default value for count <-- these cases should be avoided; the table entry should have all args), since the args_combo_id will be different for both but the result will obviously be the same. Again, the inspect module may help in this area.
[Edit] PS: Additionally, for the func_name, you may need to use a fully qualified name to avoid conflicts across modules in your package. And decorators may interfere with that as well, unless they set deco.__name__ = func.__name__, etc.
PPS: If objects are being passed to functions being memoized in the db, make sure that their __str__ is something useful & repeatable/consistent to store as arg values.
This particular case doesn't require you to re-create objects from the arg values in the db; otherwise, you'd need to make __str__ or __repr__ behave the way __repr__ was intended to (but generally isn't):
this should look like a valid Python expression that could be used to recreate an object with the same value (given an appropriate environment).
I'd use a key-value storage here, where the key could be a concatenation of the id of the function object (to guarantee the key's uniqueness) and its arguments, while the value would be the function's return value.
So a calc_regress(1, 5, 30) call would produce, for example, the key 139694472779248_1_5_30, where the first part is id(calc_regress). An example key-producing function:
>>> def produce_cache_key(fun, *args, **kwargs):
...     args_key = '_'.join(str(a) for a in args)
...     kwargs_key = '_'.join('%s%s' % (k, v) for k, v in kwargs.items())
...     return '%s_%s_%s' % (id(fun), args_key, kwargs_key)
You could keep your results in memory using a dictionary and a decorator:
>>> def cache_result(cache):
...     def decorator(fun):
...         def wrapper(*args, **kwargs):
...             key = produce_cache_key(fun, *args, **kwargs)
...             if key not in cache:
...                 cache[key] = fun(*args, **kwargs)
...             return cache[key]
...         return wrapper
...     return decorator
...
>>> cache = {}
>>> @cache_result(cache)
... def fx(x, y, z=0):
...     print('Doing some expensive job...')
...
>>> fx(1, 2, z=1)
Doing some expensive job...
>>> fx(1, 2, z=1)
>>>
