Returning multiple objects from a pytest fixture - python

I am learning how to use pytest by testing a simple event emitter implementation.
Basically, it looks like this
class EventEmitter():
    def __init__(self):
        ...

    def subscribe(self, event_map):
        # adds the listeners provided in event_map to their events
        ...

    def emit(self, event, *args):
        # emits event with given args
        ...
For convenience, I created a Listener class that is used in tests
class Listener():
    def __init__(self):
        ...

    def operation(self):
        # actual listener
        ...
Currently, test looks the following way
@pytest.fixture
def event():
    ee = EventEmitter()
    lstr = Listener()
    ee.subscribe({"event": [lstr.operation]})
    return lstr, ee

def test_emitter(event):
    lstr = event[0]
    ee = event[1]
    ee.emit("event")
    assert lstr.result == 7  # for example
In order to test event emitter, I need to check whether the inner state of the listener has changed after event propagation. Thus, I need two objects and I wonder if there is a better way to do this (maybe use two fixtures instead of one somehow) because this looks kinda ugly to me.

Usually, in order to avoid tuples and tidy up your code, you can join the values back together into a single unit as a class. The standard library has already done this for you with collections.namedtuple:
import collections
EventListener = collections.namedtuple('EventListener', 'event listener')
Now modify your fixture:
@pytest.fixture
def event_listener():
    e = EventListener(EventEmitter(), Listener())
    e.event.subscribe({'event': [e.listener.operation]})
    return e
Now modify your test:
def test_emitter(event_listener):
    event_listener.event.emit('event')
    assert event_listener.listener.result == 7

You should use a Python feature called iterable unpacking into variables.
def test_emitter(event):
    lstr, ee = event  # unpacking
    ee.emit("event")
    assert lstr.result == 7
Basically, you are assigning event[0] to lstr, and event[1] to ee. Using this feature is a very elegant way to avoid using indexes.
Discarding
In case you are going to use your fixture in multiple tests and you don't need all values in every test, you can also discard some elements of the iterable if you are not interested in using them, as follows:
l = ['a', 'b', 'c', 'd']
a, b, c, d = l  # unpacking all elements
a, _, c, d = l  # discarding b
a, _, _, d = l  # Python 2: discard b and c
a, *_, d = l    # Python 3: discard b and c
a, _, _, _ = l  # Python 2: discard b, c and d
a, *_ = l       # Python 3: discard b, c and d
Strictly speaking, you are not literally discarding the values; in Python, _ (the so-called "I don't care" name) is conventionally used for values you intend to ignore.

You will probably need two fixtures in this case.
You can try @pytest.yield_fixture like:
@pytest.yield_fixture
def event():
    ...
    yield <event_properties>

@pytest.yield_fixture
def listener(event):
    ...
    yield <listener_properties>
Note: @pytest.yield_fixture is now deprecated (a plain @pytest.fixture can yield as well): https://docs.pytest.org/en/latest/yieldfixture.html
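For reference, here is a minimal sketch of the two-fixture idea with plain @pytest.fixture (the non-deprecated spelling), assuming the EventEmitter and Listener classes from the question:
import pytest

@pytest.fixture
def emitter():
    return EventEmitter()

@pytest.fixture
def listener(emitter):
    lstr = Listener()
    emitter.subscribe({"event": [lstr.operation]})
    return lstr

def test_emitter(emitter, listener):
    emitter.emit("event")
    assert listener.result == 7  # for example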

If you cannot easily split your tuple fixture into two independent fixtures, you can now "unpack" a tuple or list fixture into other fixtures using my pytest-cases plugin, as explained in this answer.
For your example that would look like:
from pytest_cases import pytest_fixture_plus

@pytest_fixture_plus(unpack_into="lstr,ee")
def event():
    ee = EventEmitter()
    lstr = Listener()
    ee.subscribe({"event": [lstr.operation]})
    return lstr, ee

def test_emitter(lstr, ee):
    ee.emit("event")
    assert lstr.result == 7  # for example

I landed here when searching for a similar topic.
Due to lack of reputation I cannot comment on the answer from @lmiguelvargasf (https://stackoverflow.com/a/56268344/2067635), so I need to create a separate answer.
I'd also prefer returning multiple values and unpacking them to individual variables. This results in concise, Pythonic code.
There is one caveat, though:
For tests that rely on a fixture being injected via autouse=True (and therefore do not declare it as an argument), unpacking the fixture by name results in a TypeError, because the name then refers to the fixture function itself rather than to its return value.
Example:
@pytest.fixture(autouse=True)
def foo():
    return 1, 2

# Gets fixture automagically through autouse
def test_breaks():
    arg1, arg2 = foo
    assert arg1 <= arg2

# Explicit request for fixture foo
def test_works(foo):
    arg1, arg2 = foo
    assert arg1 <= arg2
test_breaks FAILED                                                      [100%]

    def test_breaks():
>       arg1, arg2 = foo
E       TypeError: cannot unpack non-iterable function object

test_works PASSED                                                       [100%]

=============== 1 passed in 0.43s ===============
Process finished with exit code 0
The fix is easy: request the fixture explicitly as a test argument (as in test_works above), or pull it from the request fixture as sketched below. But it took me some time to figure out what the problem was, so I thought I'd share my findings.
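If you really want to keep autouse=True and not list the fixture as a parameter, a minimal sketch of the alternative is to look the value up through the built-in request fixture:
import pytest

@pytest.fixture(autouse=True)
def foo():
    return 1, 2

def test_also_works(request):
    # look the fixture value up by name instead of unpacking the function object
    arg1, arg2 = request.getfixturevalue("foo")
    assert arg1 <= arg2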


Pytest - combining multiple fixture parameters into a single fixture to optimise fixture instantiation

I have an existing pytest test that makes use of some predefined lists to test the cross-product of them all:
A_ITEMS = [1, 2, 3]
B_ITEMS = [4, 5, 6]
C_ITEMS = [7, 8, 9]
I also have an expensive fixture that has internal conditions dependent on A and B items (but not C), called F:
class Expensive:
    def __init__(self):
        # expensive set up
        time.sleep(10)

    def check(self, a, b, c):
        return True  # keep it simple, but in reality this depends on a, b and c

@pytest.fixture
def F():
    return Expensive()
Currently I have a naive approach that simply parametrizes a test function:
@pytest.mark.parametrize("A", A_ITEMS)
@pytest.mark.parametrize("B", B_ITEMS)
@pytest.mark.parametrize("C", C_ITEMS)
def test_each(F, A, B, C):
    assert F.check(A, B, C)
This tests all combinations of F with A, B and C items, however it constructs a new Expensive instance via the F fixture for every test. More specifically, it reconstructs a new Expensive via fixture F for every combination of A, B and C.
This is very inefficient, because I should only need to construct a new Expensive when the values of A and B change, which they don't between all tests of C.
What I would like to do is somehow combine the F fixture with the A_ITEMS and B_ITEMS lists, so that the F fixture only instantiates a new instance once for each run through the values of C.
My first approach involves separating the A and B lists into their own fixtures and combining them with the F fixture:
class Expensive:
    def __init__(self, A, B):
        # expensive set up
        self.A = A
        self.B = B
        time.sleep(10)

    def check(self, c):
        return True  # keep it simple

@pytest.fixture(params=[1, 2, 3])
def A(request):
    return request.param

@pytest.fixture(params=[4, 5, 6])
def B(request):
    return request.param

@pytest.fixture
def F(A, B):
    return Expensive(A, B)

@pytest.mark.parametrize("C", C_ITEMS)
def test_each2(F, C):
    assert F.check(C)
Although this tests all combinations, unfortunately this creates a new instance of Expensive for each test, rather than combining each A and B item into a single instance that can be reused for each value of C.
I've looked into indirect fixtures, but I can't see a way to send multiple lists (i.e. both the A and B items) to a single fixture.
Is there a better approach I can take with pytest? Essentially what I'm looking to do is minimise the number of times Expensive is instantiated, given that it's dependent on values of item A and B.
Note: I've tried to simplify this, however the real-life situation is that F represents creation of a new process, A and B are command-line parameters for this process, and C is simply a value passed to the process via a socket. Therefore I want to be able to send each value of C to this process without recreating it every time C changes, but obviously if A or B change, I need to restart it (as they are command-line parameters to the process).
I've had some success using a more broadly scoped fixture (module or session) as a "cache" for the per-test fixture, for situations like this where the lifetimes of the fixtures proper don't align cleanly with the costs you want to amortise; a sketch of the idea follows below.
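A minimal sketch of that idea, assuming the Expensive(A, B) class and the A/B fixtures from the question: a module-scoped dict acts as the cache, and the per-test F fixture only builds a new instance when it sees a new (A, B) pair.
import pytest

@pytest.fixture(scope="module")
def expensive_cache():
    # shared across the whole module; maps (A, B) -> Expensive instance
    return {}

@pytest.fixture
def F(expensive_cache, A, B):
    if (A, B) not in expensive_cache:
        expensive_cache[(A, B)] = Expensive(A, B)
    return expensive_cache[(A, B)]

@pytest.mark.parametrize("C", C_ITEMS)
def test_each(F, C):
    assert F.check(C)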
If using pytest scoping (as proposed in the other answer) is not an option, you may try to cache the expensive object, so that it will only be constructed if needed.
Basically, this expands the proposal given in the question with an additional static cache of the last used parameters, to avoid creating a new Expensive if not needed:
@pytest.fixture(params=A_ITEMS)
def A(request):
    return request.param

@pytest.fixture(params=B_ITEMS)
def B(request):
    return request.param

class FFactory:
    lastAB = None
    lastF = None

    @classmethod
    def F(cls, A, B):
        if (A, B) != cls.lastAB:
            cls.lastAB = (A, B)
            cls.lastF = Expensive(A, B)
        return cls.lastF

@pytest.fixture
def F(A, B):
    return FFactory.F(A, B)

@pytest.mark.parametrize("C", C_ITEMS)
def test_each(F, C):
    assert F.check(C)

Default values for iterable unpacking

I've often been frustrated by the lack of flexibility in Python's iterable unpacking.
Take the following example:
a, b = range(2)
Works fine. a contains 0 and b contains 1, just as expected. Now let's try this:
a, b = range(1)
Now, we get a ValueError:
ValueError: not enough values to unpack (expected 2, got 1)
Not ideal, when the desired result was 0 in a, and None in b.
There are a number of hacks to get around this. The most elegant I've seen is this:
a, *b = function_with_variable_number_of_return_values()
b = b[0] if b else None
Not pretty, and could be confusing to Python newcomers.
So what's the most Pythonic way to do this? Store the return value in a variable and use an if block? The *varname hack? Something else?
As mentioned in the comments, the best way to do this is to simply have your function return a constant number of values and if your use case is actually more complicated (like argument parsing), use a library for it.
However, your question explicitly asked for a Pythonic way of handling functions that return a variable number of values, and I believe it can be cleanly accomplished with decorators. They're not super common, and most people tend to use them more than create them, so here's a down-to-earth tutorial on creating decorators to learn more about them.
Below is a decorated function that does what you're looking for: the decorated function can return a variable number of values, and the result is padded up to a certain length to better accommodate iterable unpacking.
def variable_return(max_values, default=None):
    # This decorator is somewhat more complicated because the decorator
    # itself needs to take arguments.
    def decorator(f):
        def wrapper(*args, **kwargs):
            actual_values = f(*args, **kwargs)
            try:
                # This will fail if `actual_values` is a single value,
                # such as a single integer or just `None`.
                actual_values = list(actual_values)
            except:
                actual_values = [actual_values]
            extra = [default] * (max_values - len(actual_values))
            actual_values.extend(extra)
            return actual_values
        return wrapper
    return decorator

# This would be a function that actually does something.
# It should not return more values than `max_values`.
@variable_return(max_values=3)
def ret_n(n):
    return list(range(n))

a, b, c = ret_n(1)
print(a, b, c)
a, b, c = ret_n(2)
print(a, b, c)
a, b, c = ret_n(3)
print(a, b, c)
Which outputs what you're looking for:
0 None None
0 1 None
0 1 2
The decorator basically takes the decorated function and returns its output along with enough extra values to fill in max_values. The caller can then assume that the function always returns exactly max_values number of arguments and can use fancy unpacking like normal.
Here's an alternative version of the decorator solution by @supersam654, using iterators rather than lists for efficiency:
def variable_return(max_values, default=None):
    def decorator(f):
        def wrapper(*args, **kwargs):
            actual_values = f(*args, **kwargs)
            count = 0  # in case f returns an empty iterable
            try:
                for count, value in enumerate(actual_values, 1):
                    yield value
            except TypeError:
                count = 1
                yield actual_values
            yield from [default] * (max_values - count)
        return wrapper
    return decorator
It's used in the same way:
@variable_return(3)
def ret_n(n):
    return tuple(range(n))
a, b, c = ret_n(2)
This could also be used with non-user-defined functions like so:
a, b, c = variable_return(3)(range)(2)
Shortest version known to me (thanks to @KellyBundy in the comments below):
a, b, c, d, e, *_ = *my_list_or_iterable, *[None]*5
Obviously, it's possible to use a default value other than None if necessary.
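For example (a quick illustration of the one-liner, assuming a three-element list unpacked into five names):
values = [1, 2, 3]
a, b, c, d, e, *_ = *values, *[None]*5
print(a, b, c, d, e)  # 1 2 3 None None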
Also, there is one nice feature in Python 3.10 which comes in handy here when we know the possible numbers of arguments up front, like when unpacking sys.argv.
Previous method:
import sys
_, x, y, z, *_ = *sys.argv, *[None]*3
New method:
import sys

match sys.argv[1:]:  # slice needed to drop the first value of sys.argv
    case [x]:
        print(f'x={x}')
    case [x, y]:
        print(f'x={x}, y={y}')
    case [x, y, z]:
        print(f'x={x}, y={y}, z={z}')
    case _:
        print('No arguments')

Checking if the value of a string is true - python

I am trying to see if a string evaluates to true or false, but I'm stuck. In the following code I have made a list of statements that I want to evaluate. I have put them in strings because the variables need to be declared after the list is created. (The reason is that the list of statements to be evaluated is passed as a parameter to a function, and the variables are declared inside that function.) I want to run through all of the items in the list and, if an item is a true statement, do something. Here is my code (there is no output generated):
a = ['b > c', 'b = c', 'b < c']
b = 5
c = 3

for item in a:
    if exec(item):
        print(item)
exec is a function in Python 3.x [1] that returns None, so you'll always have a falsy result. You probably want eval.
Also, be careful here: do not use this unless you completely trust the input strings, as it will allow execution of arbitrary code otherwise.
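A quick sketch of the difference, keeping the caveat above in mind (note that 'b = c' would also need to become 'b == c' to be a valid expression):
a = ['b > c', 'b == c', 'b < c']
b = 5
c = 3

for item in a:
    # eval() returns the value of the expression; exec() always returns None
    if eval(item):
        print(item)  # prints: b > c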
Note that this is a very strange code design and there is probably a better way to accomplish what you want... For example:
Why do you want to define the strings before the variables are defined? Coupling strings to names in your code in this way is likely to lead to a painful code maintenance experience.
[1] In Python 2.x this would fail with a SyntaxError, since exec was a statement prior to Python 3.x.
Having tried to understand your use-case a little more, I would propose that you create an API where you pass functions.
def f1(a, b, **kwargs):
    return a > b

def f2(a, b, **kwargs):
    return a == b

def f3(a, c, **kwargs):
    return a <= c

funcs = [f1, f2, f3]
Now you can define a function that will pass the parameters. You'll need to define which parameters it intends to pass -- but it will always pass them all:
def func_caller(funcs):
    param_map = {
        'a': get_a_somehow(),
        'b': get_b_somehow(),
        'c': get_c_somehow(),
        ...
    }
    for func in funcs:
        if func(**param_map):
            print("Hello World!")
There are other ways to make the "contract" between func_caller and the functions that it is calling even more binding (e.g. pass the params as a more structured object, such as a namedtuple).
from collections import namedtuple

FuncCallerParams = namedtuple('FuncCallerParams', 'a,b,c')

def f1(func_caller_params):
    return func_caller_params.a > func_caller_params.b

...

funcs = [f1, f2, ...]

def func_caller(funcs):
    a = ...
    b = ...
    c = ...
    fcp = FuncCallerParams(a, b, c)
    for func in funcs:
        if func(fcp):
            ...

What is the fastest way to create a function if its creation depends upon some other input parameters?

I have an API that supports (unfortunately) too many ways for a user to give inputs. I have to create a function depending upon the type of inputs. Once the function has been created (let's call it foo), it is run many times (around 10^7 times).
The user first gives 3 inputs - A, B and C that tell us about the types of input given to our foo function. A can have three possible values, B can have four and C can have 5. Thus giving me 60 possible permutations.
Example - if A is a list then the first parameter for foo will be a type list and similarly for B and C.
Now I cannot make these checks inside the function foo for obvious reasons. Hence, I create a function create_foo that checks for all the possible combinations and returns a unique function foo. But the drawback with this approach is that I have to write the foo function 60 times for all the permutations.
An example of this approach -
def create_foo(A, B, C):
    if A == 'a1' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            ...
        return foo
    if A == 'a2' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            ...
        return foo
    if A == 'a3' and B == 'b1' and C == 'c1':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            ...
        return foo
    .
    .
    .  (60 times)
    .
    if A == 'a3' and B == 'b4' and C == 'c5':
        def foo(*args):
            # calls some functions that parse the given parameters
            # does something
            ...
        return foo
The foo function parses the parameters differently every time, but then it performs the same task after parsing.
Now I call the function
f = create_foo(A='a2',B='b3',C='c4')
Now foo is stored in f and f is called many times. Now f is very time efficient but the problem with this approach is the messy code which involves writing the foo function 60 times.
Is there a cleaner way to do this? I cannot compromise on performance, so the new method must not take more time to evaluate than this method.
Currying lambda functions to handle all this takes more time than the above method because of the extra function calls. I cannot afford it.
A, B and C are not used by the function itself; they are only used for parsing values, and they will not change after the creation of foo.
For example, if A is of type list, no change is required, but if it's a dict, foo needs to call a function that parses the dict into a list. A, B and C only tell us about the types of the parameters in *args.
It's not 100% clear what you want, but I'll assume A, B, and C are the same as the *args to foo.
Here are some thoughts toward possible solutions:
(1) I wouldn't take it for granted that your 60 if statements necessarily make an impact. If the rest of your function is at all computationally demanding, you might not even notice the proportional increase in running time from 60 if statements.
(2) You could parse/sanity-check the arguments once (slowly/inefficiently) and then pass the sanity-checked versions to a single function foo that runs 10^7 times.
(3) You could write a single function foo that handles every case, but only let it parse/sanity check arguments if a certain optional keyword is provided:
def foo(A, B, C, check=False):
    if check:
        pass  # TODO: implement sanity-checking on A, B and C here
    pass  # TODO: perform computationally-intensive stuff here
Call foo(..., check=True ) once. Then call it without check=True 10^7 times.
(4) If you want to pass around multiple versions of the same callable function, each version configured with different argument values pre-filled, that's what functools.partial is for. This may be what you want, rather than duplicating the same code 60 times.
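A small illustration of that idea with hypothetical parsers (not from the question): decide the per-argument conversions once, pre-fill them with functools.partial, and call the resulting function many times.
from functools import partial

def foo(parse_a, parse_b, parse_c, a, b, c):
    # generic implementation; the parse_* callables encode the A/B/C decisions
    return parse_a(a), parse_b(b), parse_c(c)

# chosen once, based on the user's A, B and C inputs
f = partial(foo, list, dict.items, str)
print(f([1, 2], {"x": 1}, 42))  # ([1, 2], dict_items([('x', 1)]), '42')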
Whoa whoa whoa. You're saying you're duplicating code SIXTY TIMES in your function? No, that will not do. Here's what you do instead.
def coerce_args(f):
    def wrapped(a, b, c):
        if isinstance(a, list):
            pass
        elif isinstance(a, dict):
            a = list(a.items())  # turn it into a list
        # etc for each argument
        return f(a, b, c)
    return wrapped

@coerce_args
def foo(a, b, c):
    """foo will handle a, b, c in a CONCRETE form"""
    # do stuff
Essentially you're building one decorator that's going to change a, b, and c into a known format for foo to handle the business logic on. You've already built an API where it's acceptable to call this in a number of different ways (which is bad to begin with), so you need to support it. Doing so means internally treating it as the same way every time, and providing a helper to coerce that.
Given your vague explanation, I will make some assumptions and work with that.
I will assume that A, B and C are mutually independent and that each is parsed to the same type; with that, I give you this:
def _foo(*argv, parser_A=None, parser_B=None, parser_C=None):
    if parser_A is not None:
        # parse your A arguments here
        ...
    if parser_B is not None:
        # parse your B arguments here
        ...
    if parser_C is not None:
        # parse your C arguments here
        ...
    # do something
def foo_maker(A, B, C):
    parser_A = None
    parser_B = None
    parser_C = None
    if A == "list":  # or isinstance(A, list)
        pass
    elif A == "dict":  # or isinstance(A, dict)
        # put your parser here, for example:
        parser_A = lambda x: x.items()
    ...
    # the same with B and C
    return lambda *argv: _foo(*argv, parser_A=parser_A, parser_B=parser_B, parser_C=parser_C)
simple working example
def _foo(*argv, parser_A=None, parser_B=None, parser_C=None):
    if parser_A is not None:
        argv = parser_A(argv)
    if parser_B is not None:
        argv = parser_B(argv)
    if parser_C is not None:
        argv = parser_C(argv)
    print(argv)

def foo_maker(A, B, C):
    pa = None
    pb = None
    pc = None
    if A == 1:
        pa = lambda x: (23, 32) + x
    if B == 2:
        pb = lambda x: list(map(str, x))
    if C == 3:
        pc = lambda x: set(x)
    return lambda *x: _foo(*x, parser_A=pa, parser_B=pb, parser_C=pc)
test
>>> f1=foo_maker(1,4,3)
>>> f2=foo_maker(1,2,3)
>>> f1(1,2,3,5,8)
{32, 1, 2, 3, 5, 8, 23}
>>> f2(1,2,3,5,8)
{'8', '23', '2', '3', '5', '1', '32'}
>>> f3=foo_maker(0,0,3)
>>> f3(1,2,3,5,8)
{8, 1, 2, 3, 5}
>>> f4=foo_maker(0,0,0)
>>> f4(1,2,3,5,8)
(1, 2, 3, 5, 8)
>>> f5=foo_maker(1,0,0)
>>> f5(1,2,3,5,8)
(23, 32, 1, 2, 3, 5, 8)
>>>
Because you don't specify what actually happens in each individual foo, there is no way to generalize how foo's are created. But there is a way to speed up demarshaling.
def create_foo(A, B, C, d={}):
    if len(d) == 0:
        def _111():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                ...
            return foo
        d[('a1', 'b1', 'c1')] = _111

        def _211():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                ...
            return foo
        d[('a2', 'b1', 'c1')] = _211

        ....

        def _345():
            def foo(*args):
                # calls some functions that parse the given parameters
                # does something
                ...
            return foo
        d[('a3', 'b4', 'c5')] = _345
    return d[(A, B, C)]()
The _XXX function definitions are only evaluated on the first call to create_foo (while d is still empty), and each inner foo is only created when the selected _XXX is actually called at run time. Also, the default value of d is set when the create_foo function is defined (only once), so after the first run the code inside the if statement will not be entered anymore, and the dictionary will already hold all the functions, ready to build and return your foo's.
Edit: if the behavior in each foo is the same (as the edit of the question seems to suggest), then rather than passing the types of foo's parameters in A, B and C, maybe it's better to pass in conversion functions? Then the whole thing becomes:
def create_foo(L1, L2, L3):
    def foo(a, b, c):
        a = L1(a)
        b = L2(b)
        c = L3(c)
        # does something
        ...
    return foo
If you want to (for example) convert all 3 parameters to sets from lists, then you can call it as:
f = create_foo(set,set,set)
If you want foo to add 1 to 1st parameter, subtract 5 from second and multiply 3rd by 4, you'd say
f = create_foo(lambda x:x+1, lambda x:x-5, lambda x:x*4)

Assign function arguments to `self`

I've noticed that a common pattern I use is to assign SomeClass.__init__() arguments to self attributes of the same name. Example:
class SomeClass():
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
In fact it must be a common task for others as well as PyDev has a shortcut for this - if you place the cursor on the parameter list and click Ctrl+1 you're given the option to Assign parameters to attributes which will create that boilerplate code for you.
Is there a different, short and elegant way to perform this assignment?
You could do this, which has the virtue of simplicity:
>>> class C(object):
...     def __init__(self, **kwargs):
...         self.__dict__ = dict(kwargs)
This leaves it up to whatever code creates an instance of C to decide what the instance's attributes will be after construction, e.g.:
>>> c = C(a='a', b='b', c='c')
>>> c.a, c.b, c.c
('a', 'b', 'c')
If you want all C objects to have a, b, and c attributes, this approach won't be useful.
(BTW, this pattern comes from Guido his own bad self, as a general solution to the problem of defining enums in Python. Create a class like the above called Enum, and then you can write code like Colors = Enum(Red=0, Green=1, Blue=2), and henceforth use Colors.Red, Colors.Green, and Colors.Blue.)
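Spelled out, that enum trick is just this (a minimal sketch following the description above):
class Enum(object):
    def __init__(self, **kwargs):
        self.__dict__ = dict(kwargs)

Colors = Enum(Red=0, Green=1, Blue=2)
assert Colors.Red == 0 and Colors.Green == 1 and Colors.Blue == 2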
It's a worthwhile exercise to figure out what kinds of problems you could have if you set self.__dict__ to kwargs instead of dict(kwargs).
I sympathize with your sense that boilerplate code is a bad thing. But in this case, I'm not sure there even could be a better alternative. Let's consider the possibilities.
If you're talking about just a few variables, then a series of self.x = x lines is easy to read. In fact, I think its explicitness makes that approach preferable from a readability standpoint. And while it might be a slight pain to type, that alone isn't quite enough to justify a new language construct that might obscure what's really going on. Certainly using vars(self).update() shenanigans would be more confusing than it's worth in this case.
On the other hand, if you're passing nine, ten, or more parameters to __init__, you probably need to refactor anyway. So this concern really only applies to cases that involve passing, say, 5-8 parameters. Now I can see how eight lines of self.x = x would be annoying both to type and to read; but I'm not sure that the 5-8 parameter case is common enough or troublesome enough to justify using a different method. So I think that, while the concern you're raising is a good one in principle, in practice, there are other limiting issues that make it irrelevant.
To make this point more concrete, let's consider a function that takes an object, a dict, and a list of names, and assigns values from the dict to names from the list. This ensures that you're still being explicit about which variables are being assigned to self. (I would never suggest a solution to this problem that didn't call for an explicit enumeration of the variables to be assigned; that would be a rare-earth bug magnet):
>>> def assign_attributes(obj, localdict, names):
...     for name in names:
...         setattr(obj, name, localdict[name])
...
>>> class SomeClass():
...     def __init__(self, a, b, c):
...         assign_attributes(self, vars(), ['a', 'b', 'c'])
Now, while not horribly unattractive, this is still harder to figure out than a straightforward series of self.x = x lines. And it's also longer and more trouble to type than one, two, and maybe even three or four lines, depending on circumstances. So you only get certain payoff starting with the five-parameter case. But that's also the exact moment that you begin to approach the limit on human short-term memory capacity (= 7 +/- 2 "chunks"). So in this case, your code is already a bit challenging to read, and this would only make it more challenging.
Mod for @pcperini's answer:
>>> class SomeClass():
...     def __init__(self, a, b=1, c=2):
...         for name, value in vars().items():
...             if name != 'self':
...                 setattr(self, name, value)
...
>>> s = SomeClass(7, 8)
>>> print s.a, s.b, s.c
7 8 2
Your specific case could also be handled with a namedtuple:
>>> from collections import namedtuple
>>> SomeClass = namedtuple("SomeClass", "a b c")
>>> sc = SomeClass(1, "x", 200)
>>> print sc
SomeClass(a=1, b='x', c=200)
>>> print sc.a, sc.b, sc.c
1 x 200
Decorator magic!!
>>> class SomeClass():
...     @ArgsToSelf
...     def __init__(a, b=1, c=2, d=4, e=5):
...         pass
...
>>> s = SomeClass(6, b=7, d=8)
>>> print s.a, s.b, s.c, s.d, s.e
6 7 2 8 5
while defining:
>>> import inspect
>>> def ArgsToSelf(f):
...     def act(self, *args, **kwargs):
...         arg_names, _, _, defaults = inspect.getargspec(f)
...         defaults = list(defaults)
...         for arg in args:
...             setattr(self, arg_names.pop(0), arg)
...         for arg_name, arg in kwargs.iteritems():
...             setattr(self, arg_name, arg)
...             defaults.pop(arg_names.index(arg_name))
...             arg_names.remove(arg_name)
...         for arg_name, arg in zip(arg_names, defaults):
...             setattr(self, arg_name, arg)
...         return f(*args, **kwargs)
...     return act
Of course, you could define this decorator once and use it throughout your project. Also, this decorator works on any method, not only __init__.
You can do it via setattr(), like:
[setattr(self, key, value) for key, value in kwargs.items()]
It's not very beautiful, but it can save some space :)
So, you'll get:
kwargs = {'d': 1, 'e': 2, 'z': 3}

class P():
    def __init__(self, **kwargs):
        [setattr(self, key, value) for key, value in kwargs.items()]

x = P(**kwargs)
dir(x)
['__doc__', '__init__', '__module__', 'd', 'e', 'z']
For that simple use-case I must say I like doing it explicitly (using Ctrl+1 from PyDev), but sometimes I also end up using a bunch implementation -- one where the accepted attributes are created from attributes pre-declared in the class, so that I know what's expected (and I like it more than a namedtuple, as I find it more readable, and it won't confuse static code analysis or code-completion).
I've put on a recipe for it at: http://code.activestate.com/recipes/577999-bunch-class-created-from-attributes-in-class/
The basic idea is that you declare your class as a subclass of Bunch and it'll create those attributes in the instance (either from default or from values passed in the constructor):
class Point(Bunch):
    x = 0
    y = 0

p0 = Point()
assert p0.x == 0
assert p0.y == 0

p1 = Point(x=10, y=20)
assert p1.x == 10
assert p1.y == 20
Alex Martelli also provided a bunch implementation: http://code.activestate.com/recipes/52308-the-simple-but-handy-collector-of-a-bunch-of-named/ with the idea of updating the instance from the arguments, but that will confuse static code analysis (and, IMO, can make things harder to follow), so I'd only use that approach for an instance that's created locally and thrown away inside the same scope without being passed anywhere else.
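For completeness, a minimal sketch of what such a Bunch base class could look like, assuming the behaviour described above (defaults taken from class attributes, overridable via constructor keywords; the recipe linked above is more thorough):
class Bunch(object):
    def __init__(self, **kwargs):
        # start from the attributes pre-declared on the class...
        for name in dir(type(self)):
            if not name.startswith('_'):
                setattr(self, name, getattr(type(self), name))
        # ...then override with whatever was passed to the constructor
        for name, value in kwargs.items():
            if not hasattr(self, name):
                raise AttributeError('unexpected attribute: %s' % name)
            setattr(self, name, value)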
I solved it for myself using locals() and __dict__:
>>> class Test:
...     def __init__(self, a, b, c):
...         l = locals()
...         for key in l:
...             self.__dict__[key] = l[key]
...
>>> t = Test(1, 2, 3)
>>> t.a
1
>>>
Disclaimer
Do not use this: I was simply trying to create the answer closest to the OP's initial intentions. As pointed out in comments, this relies on entirely undefined behavior and on explicitly prohibited modifications of the symbol table.
It does work though, and has been tested under extremely basic circumstances.
Solution
class SomeClass():
    def __init__(self, a, b, c):
        vars(self).update(dict((k, v) for k, v in vars().iteritems() if k != 'self'))

sc = SomeClass(1, 2, 3)
# sc.a == 1
# sc.b == 2
# sc.c == 3
Using the vars() built-in function, this snippet iterates through all of the variables available in the __init__ method (which should, at this point, just be self, a, b, and c) and sets self's variables equal to the same, obviously ignoring the argument-reference to self (because self.self seemed like a poor decision).
One of the problems with @user3638162's answer is that locals() contains the 'self' variable. Hence, you end up with an extra self.self. If one doesn't mind the extra self, that solution can simply be:
class X:
    def __init__(self, a, b, c):
        self.__dict__.update(locals())

x = X(1, 2, 3)
print(x.a, x.__dict__)
The self can be removed after construction by del self.__dict__['self']
Alternatively, one can remove self during construction by filtering the items of locals(), e.g. with a generator expression:
class X:
    def __init__(self, a, b, c):
        self.__dict__.update(l for l in locals().items() if l[0] != 'self')

x = X(1, 2, 3)
print(x.a, x.__dict__)
