Let's imagine I have a dict:

d = {'a': 3, 'b': 4}

I want to create a function f that does the exact same thing as this function:

def f(x, a=d['a'], b=d['b']):
    print(x, a, b)

(Not necessarily print, but do some stuff with the variables, calling them directly by name.)
But I would like to create this function directly from the dict, that is to say, I would like to have something that looks like

def f(x, **d=d):
    print(x, a, b)
and that behaves like the previously defined function. The idea is that I have a large dictionary that contains default values for the arguments of my function, and I would like not to have to write

def f(a=d['a'], b=d['b'], ...)

I don't know if it's possible at all in Python. Any insight is appreciated!

Edit: The idea is to be able to call f(5, a=3).

Edit 2: The question is not about passing arguments stored in a dict to a function, but about defining a function whose argument names and default values are stored in a dict.
You cannot achieve this at function definition time because Python determines the scope of a function statically. However, it is possible to write a decorator that adds default keyword arguments at call time.
from functools import wraps

def kwargs_decorator(dict_kwargs):
    def wrapper(f):
        @wraps(f)
        def inner_wrapper(*args, **kwargs):
            new_kwargs = {**dict_kwargs, **kwargs}
            return f(*args, **new_kwargs)
        return inner_wrapper
    return wrapper
Usage
@kwargs_decorator({'bar': 1})
def foo(**kwargs):
    print(kwargs['bar'])

foo()  # prints 1
Or alternatively if you know the variable names but not their default values...
@kwargs_decorator({'bar': 1})
def foo(bar):
    print(bar)

foo()  # prints 1
Caveat
The above can be used, for example, to dynamically generate multiple functions with different default arguments. However, if the parameters you want to pass are the same for every function, it would be simpler and more idiomatic to simply pass in a dict of parameters, as sketched below.
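For contrast, here is a minimal sketch of that plain-dict style (the names process and params are just illustrative, not from any answer above):

def process(x, params):
    # look the values up by key instead of promoting them to parameters
    print(x, params['a'], params['b'])

d = {'a': 3, 'b': 4}
process(5, {**d, 'a': 10})  # override 'a' at the call site; prints: 5 10 4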
Python is designed such that the local variables of any function can be determined unambiguously by looking at the source code of the function. So your proposed syntax
def f(x, **d=d):
    print(x, a, b)
is a nonstarter because there's nothing to indicate whether a and b are local to f or not; that depends on the runtime contents of the dictionary, which could change from run to run.
If you can resign yourself to explicitly listing the names of all of your parameters, you can automatically set their default values at runtime; this has already been well covered in other answers. Listing the parameter names is probably good documentation anyway.
If you really want to synthesize the whole parameter list at run time from the contents of d, you would have to build a string representation of the function definition and pass it to exec. This is how collections.namedtuple works, for example.
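For illustration, a minimal sketch of that exec-based synthesis (my own sketch, not namedtuple's actual code; real code should validate that the dict keys are legal identifiers):

d = {'a': 3, 'b': 4}

# Build the parameter list "a=3, b=4" from the dict,
# then execute a def statement assembled from it.
params = ', '.join('{}={!r}'.format(k, v) for k, v in d.items())
src = 'def f(x, {}):\n    print(x, a, b)\n'.format(params)
namespace = {}
exec(src, namespace)
f = namespace['f']

f(5, a=3)  # prints: 5 3 4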
Variables in module and class scopes are looked up dynamically, so this is technically valid:
def f(x, **kwargs):
    class C:
        vars().update(kwargs)  # don't do this, please
        print(x, a, b)
But please don't do it except in an IOPCC entry.
try this:

# Store the default values in a dictionary
>>> defaults = {
...     'a': 1,
...     'b': 2,
... }
>>> def f(x, **kwa):
...     # Each time the function is called, merge the default values
...     # and the provided arguments. For Python >= 3.5:
...     args = {**defaults, **kwa}
...     # For Python < 3.5, copy the defaults and merge the provided
...     # arguments into the copy instead:
...     # args = defaults.copy()
...     # args.update(kwa)
...     print(args)
...
>>> f(1, f=2)
{'a': 1, 'b': 2, 'f': 2}
>>> f(1, f=2, b=8)
{'a': 1, 'b': 8, 'f': 2}
>>> f(5, a=3)
{'a': 3, 'b': 2}
Thanks Olvin Roght for pointing out how to nicely merge dictionaries in Python >= 3.5.
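Since Python 3.9 you could also use the dict union operator for the merge step, with the same right-operand-wins semantics:

args = defaults | kwa  # Python >= 3.9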
How about the **kwargs trick?
def function(arg0, **kwargs):
    print("arg is", arg0, "a is", kwargs["a"], "b is", kwargs["b"])

d = {"a": 1, "b": 2}
function(0., **d)
outcome:
arg is 0.0 a is 1 b is 2
This question is very interesting, and it seems different people have their own guesses about what the question really wants. I have my own too. Here is my code, which expresses my interpretation:
# Python 3 only
from collections import defaultdict

# The defaults are bound once, when the function definition is decorated
def kwdefault_decorator(default_dict):
    def wrapper(f):
        f.__kwdefaults__ = {}
        f_code = f.__code__
        po_arg_count = f_code.co_argcount
        keys = f_code.co_varnames[po_arg_count : po_arg_count + f_code.co_kwonlyargcount]
        for k in keys:
            f.__kwdefaults__[k] = default_dict[k]
        return f
    return wrapper

default_dict = defaultdict(lambda: "default_value")
default_dict["a"] = "a"
default_dict["m"] = "m"

@kwdefault_decorator(default_dict)
def foo(x, *, a, b):
    foo_local = "foo"
    print(x, a, b, foo_local)

@kwdefault_decorator(default_dict)
def bar(x, *, m, n):
    bar_local = "bar"
    print(x, m, n, bar_local)

foo(1)
bar(1)
# only keyword arguments are permitted for a/b and m/n
foo(1, a=100, b=100)
bar(1, m=100, n=100)
output:

1 a default_value foo
1 m default_value bar
1 100 100 foo
1 100 100 bar
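This works because CPython consults __kwdefaults__ at call time for keyword-only parameters, and the attribute is writable. A tiny standalone check (my own example, not part of the answer above):

def g(*, y):
    return y

g.__kwdefaults__ = {'y': 10}  # patch in a default after definition
print(g())  # 10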
Posting this as an answer because it would be too long for a comment.
Be careful with the kwargs_decorator approach above. If you try
@kwargs_decorator({'a': 'a', 'b': 'b'})
def f(x, a, b):
    print(f'x = {x}')
    print(f'a = {a}')
    print(f'b = {b}')

f(1, 2)
it will issue an error:
TypeError: f() got multiple values for argument 'a'
because you are passing a as a positional argument (equal to 2).
I implemented a workaround, even though I'm not sure if this is the best solution:
from functools import wraps
from inspect import getfullargspec

def default_kwargs(**default):
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            f_args = getfullargspec(f)[0]
            # names of the parameters that were filled positionally
            used_args = f_args[:len(args)]
            final_kwargs = {
                key: value
                for key, value in {**default, **kwargs}.items()
                if key not in used_args
            }
            return f(*args, **final_kwargs)
        return wrapper
    return decorator
In this solution, f_args is a list containing the names of all named positional arguments of f. Then used_args is the list of all parameters that have effectively been passed as positional arguments. Therefore final_kwargs is defined almost exactly like before, except that it checks if the argument (in the case above, a) was already passed as a positional argument.
For instance, this solution works beautifully with functions such as the following.
@default_kwargs(a='a', b='b', d='d')
def f(x, a, b, *args, c='c', d='not d', **kwargs):
    print(f'x = {x}')
    print(f'a = {a}')
    print(f'b = {b}')
    for idx, arg in enumerate(args):
        print(f'arg{idx} = {arg}')
    print(f'c = {c}')
    for key, value in kwargs.items():
        print(f'{key} = {value}')

f(1)
f(1, 2)
f(1, b=3)
f(1, 2, 3, 4)
f(1, 2, 3, 4, 5, c=6, g=7)
Note also that the default values passed in default_kwargs have higher precedence than the ones defined in f. For example, the default value for d in this case is actually 'd' (defined in default_kwargs), and not 'not d' (defined in f).
You can unpack the values of the dict:

from collections import OrderedDict

def f(x, a, b):
    print(x, a, b)

d = OrderedDict({'a': 3, 'b': 4})
f(10, *d.values())
UPD.
Yes, it's possible to implement this mad idea of modifying the local scope by creating a decorator that returns a class with an overridden __call__() and stores your defaults in class scope, BUT IT'S MASSIVE OVERKILL (see the sketch below).
Your problem is that you're trying to hide problems in your architecture behind these tricks. If you store your default values in a dict, then access them by key. If you want to use keywords, define a class.
P.S. I still don't understand why this question collected so many upvotes.
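For completeness, here is one minimal reading of that class-based idea (illustrative names, my own sketch; as said above, it buys you nothing over merging dicts):

class DefaultsCaller:
    def __init__(self, func, defaults):
        self.func = func
        self.defaults = defaults  # defaults live in the instance, set once

    def __call__(self, *args, **kwargs):
        # merge the stored defaults with the call-site keywords
        return self.func(*args, **{**self.defaults, **kwargs})

def with_defaults(defaults):
    def deco(f):
        return DefaultsCaller(f, defaults)
    return deco

@with_defaults({'a': 3, 'b': 4})
def f(x, **kw):
    print(x, kw['a'], kw['b'])

f(5, a=3)  # prints: 5 3 4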
Sure...
hope this helps
def funcc(x, **kwargs):
    # NOTE: this does not actually work in CPython: inside a function,
    # locals() returns a snapshot, so updating it never creates the
    # names a, b, c, d, and the print below raises NameError.
    locals().update(kwargs)
    print(x, a, b, c, d)

kwargs = {'a': 1, 'b': 2, 'c': 1, 'd': 1}
x = 1
funcc(x, **kwargs)
Given three or more variables, I want to find the name of the variable with the min value.
I can get the min value from the list, and I can get the index within the list of the min value. But I want the variable name.
I feel like there's another way to go about this that I'm just not thinking of.
a = 12
b = 9
c = 42
cab = [c,a,b]
# yields 9 (the min value)
min(cab)
# yields 2 (the index of the min value)
cab.index(min(cab))
What code would yield 'b'?
The magic of vars() saves you from having to build a dictionary up front, if you are willing to keep the values in instance variables:
class Foo():
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c

    def min_name(self, names=None):
        d = vars(self)
        if not names:
            names = d.keys()
        key_min = min(names, key=(lambda k: d[k]))
        return key_min
In action
>>> x = Foo(1,2,3)
>>> x.min_name()
'a'
>>> x.min_name(['b','c'])
'b'
>>> x = Foo(5,1,10)
>>> x.min_name()
'b'
Right now it'll crash if you pass an invalid variable name in the parameter list for min_name, but that's resolvable (see the sketch below).
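A hedged sketch of one such fix (my tweak, not part of the answer): filter the requested names against vars(self) before taking the minimum. It still raises if the filtered selection ends up empty.

def min_name(self, names=None):
    d = vars(self)
    # silently drop names that aren't instance variables
    names = [n for n in (names or d.keys()) if n in d]
    return min(names, key=lambda k: d[k])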
You can also update the dictionary, and the change is reflected in the instance's attributes:
def increment_min(self):
    key = self.min_name()
    vars(self)[key] += 1
Example:
>>> x = Foo(2,3,4)
>>> x.increment_min()
>>> x.a
3
You cannot get the name of the variable with the minimum/maximum value like this, since, as @jasonharper commented: cab is nothing more than a list containing three integers; there is absolutely no connection to the variables that those integers originally came from.
A simple workaround is to use pairs, like this:
>>> pairs = [("a", 12), ("b", 9), ("c", 42)]
>>> min(pairs, key=lambda p: p[1])  # compare by value, not by name
('b', 9)
>>> min(pairs, key=lambda p: p[1])[0]
'b'
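An equivalent dict-based idiom, if you'd rather start from a mapping (min iterates over the keys, and key=vals.get compares them by value):

>>> vals = {"a": 12, "b": 9, "c": 42}
>>> min(vals, key=vals.get)
'b'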
See Green Cloak Guy's answer, but if you want to go for readability, I suggest following a similar approach to mine.
You'd have to get very creative for this to work, and the only solution I can think of is rather inefficient.
You can get the memory address of the data b refers to fairly easily:
>>> hex(id(b))
'0xaadd60'
>>> hex(id(cab[2]))
'0xaadd60'
To actually map that back to a variable name, though, the only way is to look through the variables and find the one that points to the right place.
You can do this by using the globals() function:
# get a list of all the variable names in the current namespace that reference your desired value
referent_vars = [k for k, v in globals().items() if id(v) == id(cab[2])]
var_name = referent_vars[0]
There are two big problems with this solution:
Namespaces - you can't put this code in a function, because if you call it from another function, globals() won't see that function's local variables.
Time - this requires searching through the entire global namespace.
The first problem could be alleviated by additionally passing the current namespace in as a variable:
def get_referent_vars(val, namespace):
    return [k for k, v in namespace.items() if id(v) == id(val)]

def main():
    a = 12
    b = 9
    c = 42
    cab = [a, b, c]
    var_name = get_referent_vars(
        cab[cab.index(min(cab))],
        locals()  # a, b and c live in main's local namespace, not in globals()
    )[0]
    print(var_name)
    # should print 'b'
d = {}
d[3] = 0
d[1] = 4
I tried
mask = d > 1 # TypeError: '>' not supported between instances of 'dict' and 'int'
mask = d.values > 1 # TypeError: '>' not supported between instances of 'builtin_function_or_method' and 'int'
Neither is correct. Is it possible to perform the computation without using a dictionary comprehension?
The desired output would be:
{3: False, 1: True}
I feel like what you want is the ability to actually write d < 5 and magically get a new dictionary (which I don't think is possible with plain dict()). But on the other hand I thought this was a great idea, so I implemented a first version:
"""Here is our strategy for implementing this:
1) Inherit the abstract Mapping which define a
set of rules — interface — we will have to respect
to be considered a legitimate mapping class.
2) We will implement that by delegating all the hard
work to an inner dict().
3) And we will finally add some methods to be able
to use comparison operators.
"""
import collections
import operator
"Here is step 1)"
class MyDict(collections.abc.MutableMapping):
"step 2)"
def __init__(self, *args):
self.content = dict(*args)
# All kinds of delegation to the inner dict:
def __iter__(self): return iter(self.content.items())
def __len__(self): return len(self.content)
def __getitem__(self, key): return self.content[key]
def __setitem__(self, key, value): self.content[key] = value
def __delitem__(self, key): del self.content[key]
def __str__(self): return str(self.content)
"And finally, step 3)"
# Build where function using the given comparison operator
def _where_using(comparison):
def where(self, other):
# MyDict({...}) is optional
# you could just decide to return a plain dict: {...}
return MyDict({k: comparison(v, other) for k,v in self})
return where
# map all operators to the corresponding "where" method:
__lt__ = _where_using(operator.lt)
__le__ = _where_using(operator.le)
__eq__ = _where_using(operator.eq)
__gt__ = _where_using(operator.gt)
__ge__ = _where_using(operator.ge)
We can use this the way you asked for:
>>> d = MyDict({3:0, 1:4})
>>> print(d)
{3: 0, 1: 4}
>>> print(d > 1)
{3: False, 1: True}
Note that this would also work on other types of (comparable) objects:
>>> d = MyDict({3:"abcd", 1:"abce"})
>>> print(d)
{3: 'abcd', 1: 'abce'}
>>> print(d > "abcd")
{3: False, 1: True}
>>> print(d > "abcc")
{3: True, 1: True}
Here's an easy way to get something like d < 4. You just need:

import pandas as pd

res = pd.Series(d) < 4
res.to_dict()  # returns {3: True, 1: False}
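For reference, the dict-comprehension baseline that the question wants to avoid is itself a one-liner (shown here just for comparison):

mask = {k: v > 1 for k, v in d.items()}  # {3: False, 1: True}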
I have a numpy array that contains a list of objects.
x = np.array([obj1, obj2, obj3])
Here is the definition of the object:
class obj():
    def __init__(self, id):
        self.id = id

obj1 = obj(6)
obj2 = obj(4)
obj3 = obj(2)
Instead of accessing the numpy array based on the position of the object, I want to access it based on the value of id.
For example:
# x[2] = obj3
# x[4] = obj2
# x[6] = obj1
After doing some research, I learned that I could make a structured array:
x = np.array([(3,2,1)],dtype=[('2', 'i4'),('4', 'i4'), ('6', 'i4')])
# x['2'] --> 3
However, the problem with this is that I want the array to take integers as indexes, and dtypes must have a name of type str. Furthermore, I don't think structured arrays can be lists of objects.
You should be able to use filter() here, along with a lambda expression:

np.array(list(filter(lambda o: o.id == 1, x)))

Note that filter() returns an iterator in Python 3 (it returned a list in Python 2), so you need to consume it, e.g. with list(), to generate a new np.array from the result.
But this does not take care of duplicate keys if you want to access your data key-like. It is possible to have more than one object with the same id attribute. You might want to control uniqueness of keys, as sketched below.
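A minimal sketch of such a uniqueness check (index_by_id is my own hypothetical helper, not part of the answer):

def index_by_id(objects):
    by_id = {}
    for o in objects:
        if o.id in by_id:
            raise ValueError('duplicate id: {}'.format(o.id))
        by_id[o.id] = o
    return by_id

lookup = index_by_id(x)  # lookup[4] -> obj2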
If you only want to be able to access subarrays "by-index" (e.g. x[2, 4]), with index as id, then you could simply create your own struct:
import collections

class MyArray(collections.OrderedDict):
    def __init__(self, values):
        super(MyArray, self).__init__((v.id, v) for v in values)

    def __rawgetitem(self, key):
        return super(MyArray, self).__getitem__(key)

    def __getitem__(self, key):
        # A scalar key returns the object itself; an iterable of keys
        # returns a new MyArray holding the matching objects.
        if not hasattr(key, '__iter__'):
            return self.__rawgetitem(key)
        return MyArray(self.__rawgetitem(k) for k in key)

    def __repr__(self):
        return 'MyArray({{{}}})'.format(', '.join(
            '{}: {}'.format(k, self.__rawgetitem(k)) for k in self.keys()))
>>> class obj():
... def __init__(self,id):
... self.id = id
... def __repr__ (self):
... return "obj({})".format(self.id)
...
>>> obj1 = obj(6)
>>> obj2 = obj(4)
>>> obj3 = obj(2)
>>> x = MyArray([obj1, obj2, obj3])
>>> x
MyArray({6: obj(6), 4: obj(4), 2: obj(2)})
>>> x[4]
obj(4)
>>> x[2, 4]
MyArray({2: obj(2), 4: obj(4)})