When moving from one API to another, it can be helpful to map between similar keywords in each API, allowing one controller API to dispatch flexibly to other libraries without the user needing to fuss with the different APIs under the hood.
Assume some library, other_api, has a function called "logarithm", and the keyword argument for the base is a name I need to factor out of my code, say "log_base_val", so that to use it from other_api I would type (for example):
other_api.logarithm(log_base_val=math.e)
Consider a toy class like this:
import other_api
import math
import functools

class Foo(object):
    # maps this class's argument names to other_api's keyword names
    _SUPPORTED_ARGS = {"base": "log_base_val"}

    def arg_binder(self, other_api_function_name, **kwargs):
        other_api_function = getattr(other_api, other_api_function_name)
        other_api_kwargs = {self._SUPPORTED_ARGS[k]: v for k, v in kwargs.iteritems()}
        return functools.partial(other_api_function, **other_api_kwargs)
With Foo, I can map some other API, where this argument is always called base, like this:
f = Foo()
ln = f.arg_binder("logarithm", base=math.e)
and ln is logically equivalent to (with log_base_val=math.e in kwargs, from functools):
other_api.logarithm(*args, **kwargs)
However, manually making the same argument binding by invoking functools leads to a different function object each time:
In [10]: import functools
In [11]: def foo(a, b):
   ....:     return a + b
   ....:
In [12]: f1 = functools.partial(foo, 2)
In [13]: f2 = functools.partial(foo, 2)
In [14]: id(f1)
Out[14]: 67615304
In [15]: id(f2)
Out[15]: 67615568
So testing for f1 == f2 won't succeed as intended:
In [16]: f1 == f2
Out[16]: False
So the question is: what is the prescribed way to test whether the argument binding function has resulted in the correct output function object?
The func attribute on the partial() object is a reference to the original function object:
f1.func is f2.func
Function objects themselves don't implement a __eq__ method, so you may as well just use is to test for identity.
Similarly, the partial().args and partial().keywords contain the arguments and keyword arguments to be passed to the function when called.
Demo:
>>> from functools import partial
>>> def foo(a, b):
...     return a + b
...
>>> f1 = partial(foo, 2)
>>> f2 = partial(foo, 2)
>>> f1.func is f2.func
True
>>> f1.args
(2,)
>>> f2.args
(2,)
>>> f1.keywords is None
True
>>> f2.keywords is None
True
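If you need a single yes/no check rather than comparing the attributes by hand, you can wrap those three comparisons in a small helper. This is just a sketch (partials_equal is not part of functools):

def partials_equal(p1, p2):
    # Two partials are "equal" for our purposes if they wrap the same
    # function object and bind the same positional and keyword arguments.
    return (p1.func is p2.func
            and p1.args == p2.args
            and p1.keywords == p2.keywords)

With this, partials_equal(f1, f2) from the demo above returns True.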
class Main(object):
    def __init__(self, config):
        self.attributes = config

    def return_new_copy(self, additional_attributes):
        additional_attributes.update(self.attributes)
        return Main(additional_attributes)
I want to update the instance attributes and return a new instance of the same class. I guess I am trying to find out if the above code is Pythonic or if it's a dirty approach. I can't use a classmethod for several reasons not mentioned here. Is there another recommended approach?
Your return_new_copy modifies the parameter passed in, which is probably undesirable. It also overrides in the wrong direction (giving precedence to self.attributes).
I'd write it as follows:
def return_new_copy(self, additional_attributes):
    # python<3.5 if there are only string keys:
    # attributes = dict(self.attributes, **additional_attributes)
    # python<3.5 if there are non-string keys:
    # attributes = self.attributes.copy()
    # attributes.update(additional_attributes)
    # python3.5+
    attributes = {**self.attributes, **additional_attributes}
    return type(self)(attributes)
A few subtleties:
- I make sure to copy both the input attributes and the self attributes
- I merge the additional attributes on top of the self attributes
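To make both points concrete, here is a quick sketch (Python 3.5+, with made-up attribute dicts) showing that neither input is mutated and that the additional attributes win on key collisions:

self_attributes = {"colour": "red", "size": 1}
additional_attributes = {"size": 2}

attributes = {**self_attributes, **additional_attributes}
print(attributes)             # {'colour': 'red', 'size': 2} -- additional wins
print(self_attributes)        # {'colour': 'red', 'size': 1} -- unchanged
print(additional_attributes)  # {'size': 2} -- unchanged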
If you're looking for something to do this automatically, you might want to check out namedtuple
For example:
>>> import collections
>>> C = collections.namedtuple('C', ('a', 'b'))
>>> x = C(1, 2)
>>> x
C(a=1, b=2)
>>> y = x._replace(b=3)
>>> y
C(a=1, b=3)
>>> x
C(a=1, b=2)
I am trying to get a single Handler of a specific custom type MemoryListHandler in the logger.handlers collection.
With .NET I would simply use the following LINQ extension, which filters elements and returns only those of type MemoryListHandler:
logger.handlers.OfType<MemoryListHandler>().SingleOrDefault()
What would be the most elegant equivalent in Python?
My current (not very neat) attempt is:
next((handler for handler in logger.handlers if handler is MemoryListHandler), None)
You might try the index method.
try:
    lh = logger.handlers
    x = lh[lh.index(MemoryListHandler)]
except ValueError:
    x = some_default_value
Python is dynamically typed, therefore you might not need to convert anything.
However, in some cases you still might need to convert, say, int to string:
map(lambda x: str(x), [1, 2, 3])
Or, given your function accepts only one argument, just pass the function alone :
map(str, [1, 2, 3])
Update
filter(lambda x: type(x) == YourClass, [your_array])
In Python the is operator tests identity, not type as it does in C#. You want isinstance for your test, which will also work with subtypes of the target_type you're looking for.
Using the Python REPL to illustrate the difference between is and isinstance:
>>> s = ""
>>> s is str
False
>>> isinstance(s, str)
True
>>> class Foo:
...     def __init__(self):
...         pass
...
>>> f = Foo()
>>> g = Foo()
>>> f is g
False
>>> f is Foo
False
>>> g is Foo
False
>>> x = f
>>> f is x
True
>>> g is x
False
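To see the subtype behaviour mentioned above, here is a sketch using a stand-in MemoryListHandler and a hypothetical subclass of it:

>>> import logging
>>> class MemoryListHandler(logging.Handler):   # stand-in for your custom handler
...     def emit(self, record):
...         pass
...
>>> class VerboseMemoryListHandler(MemoryListHandler):   # hypothetical subclass
...     pass
...
>>> h = VerboseMemoryListHandler()
>>> type(h) is MemoryListHandler
False
>>> isinstance(h, MemoryListHandler)
True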
Your own expression is pretty close to what you want. You can hide it behind a method:
def first_of_type(xs, target_type):
    return next((x for x in xs if isinstance(x, target_type)), None)
Usage becomes short and sweet:
first_of_type(logger.handlers, MemoryListHandler)
Note: addition of type hints and doc comments would help usability.
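For example, a type-hinted version of the same helper might look like this (just a sketch):

from typing import Iterable, Optional, Type, TypeVar

T = TypeVar("T")

def first_of_type(xs: Iterable[object], target_type: Type[T]) -> Optional[T]:
    """Return the first element of xs that is an instance of target_type, or None."""
    return next((x for x in xs if isinstance(x, target_type)), None)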
Let's say there's a function in a Python library (let's call it mymodule):
def some_func(a, b='myDefaultValue'):
    return some_computation
and then there's another function in another module that calls it,
import mymodule
def wrapper(a, b):
    return some_transform(mymodule.some_func(a, b))
How do I make it such that wrapper inherits some_func's default value for the b parameter? I could do something like:
def wrapper(a, b=None):
    if b:
        return some_transform(some_func(a, b))
    else:
        return some_transform(some_func(a))
but that seems needlessly cumbersome, leads to a combinatorial explosion of possibilities with multiple optional arguments, and makes it so I can't explicitly pass in None to wrapper.
Is there a way of getting the default args for a function, or is common practice to simply pull that value out into a shared constant that both function declarations can make use of?
You can use func_defaults:
https://docs.python.org/2/library/inspect.html?highlight=func_defaults#types-and-members
func_defaults: tuple of any default values for arguments
def some_func(a, b='myDefaultValue'):
    print a, b

def wrapper(a, b):
    b = some_func.func_defaults[0] if b is None else b
    some_func(a, b)

print "b is 'there'"
a = "hello"
b = "there"
wrapper(a, b)

print "b is 'None'"
b = None
wrapper(a, b)
output:
b is 'there'
hello there
b is 'None'
hello myDefaultValue
EDIT: To answer your question from the comments, there isn't anything built in to look up a function's default-valued arguments by name. However, you know that the arguments with default values have to come after the non-optional arguments. So if you know the total number of arguments and how many of them have default values, you can subtract the two to get the starting index of the arguments with default values. Then you can zip the list of argument names (starting at that index) together with the tuple of default values and create a dictionary from the result. Use the inspect module to get all of the information you need.
Like so:
>>> import inspect
>>> def some_func(a,b,c,d="dee",e="ee"):
...     print a,b,c,d,e
...
>>> some_func("aaa","bbb","ccc",e="EEE")
aaa bbb ccc dee EEE
>>> some_funcspec = inspect.getargspec(some_func)
>>> some_funcspec
ArgSpec(args=['a', 'b', 'c', 'd', 'e'], varargs=None, keywords=None, defaults=('dee', 'ee'))
>>> defargsstartindex = len(some_funcspec.args) - len(some_funcspec.defaults)
>>> defargsstartindex
3
>>> namedargsdict = dict(zip([key for key in some_funcspec.args[defargsstartindex:]], list(some_funcspec.defaults)))
>>> namedargsdict
{'e': 'ee', 'd': 'dee'}
In the example above, namedargsdict is a dictionary of some_func's arguments that have default values, mapped to those defaults.
Further reading:
https://docs.python.org/2/library/inspect.html#inspect.getargspec
inspect.getargspec(func): Get the names and default values of a Python function's arguments. A tuple of four things is returned: (args, varargs, keywords, defaults). args is a list of the argument names (it may contain nested lists). varargs and keywords are the names of the * and ** arguments or None. defaults is a tuple of default argument values or None if there are no default arguments; if this tuple has n elements, they correspond to the last n elements listed in args. Changed in version 2.6: Returns a named tuple ArgSpec(args, varargs, keywords, defaults).
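If you are on Python 3, note that getargspec is deprecated (and eventually removed); the same default-value lookup can be done with inspect.signature, roughly like this:

>>> import inspect
>>> def some_func(a, b, c, d="dee", e="ee"):
...     pass
...
>>> {name: p.default
...  for name, p in inspect.signature(some_func).parameters.items()
...  if p.default is not inspect.Parameter.empty}
{'d': 'dee', 'e': 'ee'}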
You can use argument unpacking to accomplish this:
In [1]: def some_func(a,b='myDefaultValue'):
   ...:     print a, b
   ...:
In [2]: def wrapper(a, *args, **kwargs):
   ...:     some_func(a, *args, **kwargs)
   ...:
In [3]: some_func('foo', 'bar')
foo bar
In [4]: some_func('baz')
baz myDefaultValue
In [5]: wrapper('foo', 'bar')
foo bar
In [6]: wrapper('baz')
baz myDefaultValue
If you plan to wrap multiple functions in this way, you might consider making wrapper a decorator:
In [1]: from functools import wraps
In [2]: def wrapper(func):
   ...:     @wraps(func)
   ...:     def decorated(a, *args, **kwargs):
   ...:         print 'wrapper invoked with a = {}'.format(a)
   ...:         return func(a, *args, **kwargs)
   ...:     return decorated
   ...:
In [3]: @wrapper
   ...: def some_func(a, b='myDefaultValue'):
   ...:     print a, b
   ...:
In [4]: some_func('foo', 'bar')
wrapper invoked with a = foo
foo bar
In [5]: some_func('baz')
wrapper invoked with a = baz
baz myDefaultValue
Your question is not entirely clear, but as far as I understand it, you should use a class so that you can easily share values among the functions inside the class:
class SomeClass:
    def __init__(self, b='defaultvalue'):
        self.b = b

    def some_func(self, a, b):
        pass

    def wrapper(self, a):
        self.some_func(a, self.b)
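Usage would then look roughly like this (a sketch; some_func above is still just a stub):

obj = SomeClass()               # b falls back to 'defaultvalue'
obj.wrapper("some input")       # some_func is called with b == 'defaultvalue'

custom = SomeClass(b="other")   # override the shared default once, at construction
custom.wrapper("some input")    # some_func is called with b == 'other'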
I have a lambda function which is passed to an object and stored as a variable:
f = lambda x: x.method_foo()
I want to determine the name of the method called on the variable x, as a string. So I want
method_foo
saved as a string.
Any help appreciated.
You could access the lambda's code object with func_code, and access the code's local names with co_names.
>>> f = lambda x: x.method_foo
>>> f.func_code.co_names
('method_foo',)
>>> f.func_code.co_names[0]
'method_foo'
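Note that func_code is the Python 2 name; on Python 2.6+ and Python 3 the same code object is also available as __code__, so the Python 3 spelling would be:

>>> f = lambda x: x.method_foo
>>> f.__code__.co_names
('method_foo',)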
This is a bit "crazy", but you can pass a Mock to f and get the method that was added to the mock after calling the function:
>>> from mock import Mock
>>> f = lambda x: x.method_foo
>>> m = Mock()
>>> old_methods = dir(m)
>>> f(m)
<Mock name='mock.method_foo' id='4517582608'>
>>> new_methods = dir(m)
>>> next(method for method in new_methods if method not in old_methods)
'method_foo'
It seems pretty simple:
# builtins work fine:
>>> map (str, [(), (), ()])
['()', '()', '()']
# but no luck for class methods:
>>> class C (object):
...     def m(self):
...         return 42
...
>>> c = C()
>>> map(c.m, [(), (), ()])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: m() takes exactly 1 argument (2 given)
You need to add a parameter to your m method, to which the argument supplied by map will be passed.
class C (object):
    def m(self, x):
        return 42
>>> c = C()
>>> map(c.m, [(), (), ()])
[42, 42, 42]
You see, c.m is a bound method, so it is already as if m(c) were being called; you need a placeholder for the additional parameter passed in by map.
c and the argument passed by map are the two arguments to m that your traceback is complaining about:
TypeError: m() takes exactly 1 argument (2 given)
map(f, L) always calls f with a single argument whose value is taken from L. It's always a single argument, never zero. The ()s in the list are not argument lists; they are empty tuples. Outside of a function call, things in parentheses aren't arguments to a function, they are objects called "tuples" (think of them as immutable lists). Check the difference between str() and str(()): str with no arguments gives '' and not '()'.
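For example:

>>> str()
''
>>> str(())
'()'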
If you have tuples of arguments and want to call a callable (function or method) with those arguments, you can use itertools.starmap. In particular, if you pass empty tuples, the function will be called with no arguments. It returns an iterator, so if you need a list you have to call list() on the result explicitly:
>>> import itertools
>>> f = lambda: 42
>>> L = [(), (), ()]
>>> values = itertools.starmap(f, L)
>>> print list(values)
[42, 42, 42]
In the general case, it works with any tuple of arguments:
>>> f = lambda *x: sum(x)
>>> L = [(1,2), (4, ), (5,6)]
>>> values = itertools.starmap(f, L)
>>> print list(values)
[3, 4, 11]
If you want to simply call a function multiple times and get the result, you might consider using a list comprehension or a generator expression instead.
>>> f = lambda: 42
>>> [f() for _ in xrange(3)]
[42, 42, 42]
>>> values = (f() for _ in xrange(3))
>>> print list(values)
[42, 42, 42]
If you have a list of empty tuples like in your example, you might use xrange(len(L)) in the place of xrange(3).
That's not a class method: it lacks the classmethod decorator, and self should be cls. But you don't want a class method here anyway, as class methods are methods which operate on classes (you can pass other objects, of course, but that's not the intended use case; the @classmethod would be grossly misleading).
You're looking for the term "unbound method", which you get by referring to a member of the class, not to an instance thereof. Use C.m. Note of course that the method will be called with self as (in your example) a tuple and not an instance of C. Normally, such trickery should be restricted to avoid this (e.g. str.lower and a bunch of strings is O.K.).
First, that's not a class method. A class method takes the class as its first argument, and is called on the class, not an instance of it:
class C(object):
    @classmethod
    def m(cls):
        return 42

map(C.m, range(10))
However, that will still break because map passes in each item from the iterable to the function, and your method only accepts one argument, the class.
If you change your method to accept the extra argument (def m(cls, arg)), it will work. You could also use an instance method instead of a class method:
class C(object):
    def m(self, *args):  # or def m(self, n)
        return 42

c = C()
map(c.m, range(10))
You forgot the decorator that makes it a class method, but you probably want a static method:
static:
class C(object):
    @staticmethod
    def m(arg):
        return 42

class:
class C(object):
    @classmethod
    def m(cls, arg):
        # cls is a reference to the actual "C" type, not an instance of the "C" class.
        return 42
m is a one-argument method that takes an object of type C. The call c.m() is actually equivalent to C.m(c), which is just 42. But 42 is not a function you can map over a list like [(), (), ()].
The following should work:
class C (object):
    def f(self): return lambda x: x+1

two, three, four = map(C().f(), [1, 2, 3])
Now C().f() returns a function, instead of a constant.