I am trying to implement an iterable proxy for a web resource (lazily fetched images).
At first I did this (returning ids; in production these will be image buffers):
def iter(ids=[1,2,3]):
    for id in ids:
        yield id
and that worked nicely, but now I need to keep state.
I read about the four ways to define iterators and judged that the iterator protocol is the way to go. Below is my attempt to implement it, and its failure.
class Test:
    def __init__(self, ids):
        self.ids = ids

    def __iter__(self):
        return self

    def __next__(self):
        for id in self.ids:
            yield id
        raise StopIteration

test = Test([1,2,3])
for t in test:
    print('new value', t)
Output:
new value <generator object Test.__next__ at 0x7f9c46ed1750>
new value <generator object Test.__next__ at 0x7f9c46ed1660>
new value <generator object Test.__next__ at 0x7f9c46ed1750>
new value <generator object Test.__next__ at 0x7f9c46ed1660>
new value <generator object Test.__next__ at 0x7f9c46ed1750>
forever.
What's wrong?
Thanks to absolutely everyone! It's all new to me, but I'm learning new cool stuff.
Your __next__ method uses yield, which makes it a generator function. Generator functions return a new iterator when called.
But the __next__ method is part of the iterator interface. It should not itself be an iterator. __next__ should return the next value, not something that returns all values(*).
Since you want to create an iterable, you can simply make __iter__ the generator here:
class Test:
    def __init__(self, ids):
        self.ids = ids

    def __iter__(self):
        for id in self.ids:
            yield id
Note that a generator function should not use raise StopIteration, just returning from the function does that for you.
The above class is an iterable. Iterables only have an __iter__ method, and no __next__ method. Iterables produce an iterator when __iter__ is called:
Iterable -> (call __iter__) -> Iterator
In the above example, because Test.__iter__ is a generator function, it creates a new object each time we call it:
>>> test = Test([1,2,3])
>>> test.__iter__() # create an iterator
<generator object Test.__iter__ at 0x111e85660>
>>> test.__iter__()
<generator object Test.__iter__ at 0x111e85740>
A generator object is a specific kind of iterator, one created by calling a generator function, or by using a generator expression. Note that the hex values in the representations differ, two different objects were created for the two calls. This is by design! Iterables produce iterators, and can create more at will. This lets you loop over them independently:
>>> test_it1 = test.__iter__()
>>> test_it1.__next__()
1
>>> test_it2 = test.__iter__()
>>> test_it2.__next__()
1
>>> test_it1.__next__()
2
Note that I called __next__() on the object returned by test.__iter__(), the iterator, not on test itself, which doesn't have that method because it is only an iterable, not an iterator.
Iterators also have an __iter__ method, which always must return self, because they are their own iterators. It is the __next__ method that makes them an iterator, and the job of __next__ is to be called repeatedly, until it raises StopIteration. Until StopIteration is raised, each call should return the next value. Once an iterator is done (has raised StopIteration), it is meant to then always raise StopIteration. Iterators can only be used once, unless they are infinite (never raise StopIteration and just keep producing values each time __next__ is called).
So this is an iterator:
class IteratorTest:
def __init__(self, ids):
self.ids = ids
self.nextpos = 0
def __iter__(self):
return self
def __next__(self):
if self.ids is None or self.nextpos >= len(self.ids):
# we are done
self.ids = None
raise StopIteration
value = self.ids[self.nextpos]
self.nextpos += 1
return value
This has to do a bit more work; it has to keep track of what the next value to produce would be, and if we have raised StopIteration yet. Other answerers here have used what appear to be simpler ways, but those actually involve letting something else do all the hard work. When you use iter(self.ids) or (i for i in ids) you are creating a different iterator to delegate __next__ calls to. That's cheating a bit, hiding the state of the iterator inside ready-made standard library objects.
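For example, a minimal sketch of that delegation approach (my own illustration, not code from the question):

class DelegatingTest:
    def __init__(self, ids):
        self.ids = ids
        self._it = iter(self.ids)   # a ready-made list_iterator keeps the position for us

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._it)       # delegate; StopIteration propagates automatically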
You don't usually see anything calling __iter__ or __next__ in Python code, because those two methods are just the hooks that you can implement in your Python classes; if you were to implement an iterator in the C API then the hook names are slightly different. Instead, you either use the iter() and next() functions, or just use the object in syntax or a function call that accepts an iterable.
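In other words, using the Test iterable defined above (a small sketch):

test = Test([1, 2, 3])
it = iter(test)    # the preferred spelling of test.__iter__()
print(next(it))    # the preferred spelling of it.__next__(); prints 1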
The for loop is such syntax. When you use a for loop, Python uses the (moral equivalent) of calling __iter__() on the object, then __next__() on the resulting iterator object to get each value. You can see this if you disassemble the Python bytecode:
>>> from dis import dis
>>> dis("for t in test: pass")
  1           0 LOAD_NAME                0 (test)
              2 GET_ITER
        >>    4 FOR_ITER                 4 (to 10)
              6 STORE_NAME               1 (t)
              8 JUMP_ABSOLUTE            4
        >>   10 LOAD_CONST               0 (None)
             12 RETURN_VALUE
The GET_ITER opcode at position 2 calls test.__iter__(), and FOR_ITER uses __next__ on the resulting iterator to keep looping (executing STORE_NAME to set t to the next value, then jumping back to position 4), until StopIteration is raised. Once that happens, it'll jump to position 10 to end the loop.
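In plain Python, the loop above is roughly equivalent to the following sketch (a moral equivalent of what the bytecode does, not the literal implementation):

it = iter(test)            # GET_ITER: test.__iter__()
while True:
    try:
        t = next(it)       # FOR_ITER: it.__next__()
    except StopIteration:
        break              # jump past the loop
    pass                   # the (empty) loop body; t has already been bound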
If you want to play more with the difference between iterators and iterables, take a look at the Python standard types and see what happens when you use iter() and next() on them. Like lists or tuples:
>>> foo = (42, 81, 17, 111)
>>> next(foo) # foo is a tuple, not an iterator
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'tuple' object is not an iterator
>>> t_it = iter(foo) # so use iter() to create one from the tuple
>>> t_it # here is an iterator object for our foo tuple
<tuple_iterator object at 0x111e9af70>
>>> iter(t_it) # it returns itself
<tuple_iterator object at 0x111e9af70>
>>> iter(t_it) is t_it # really, it returns itself, not a new object
True
>>> next(t_it) # we can get values from it, one by one
42
>>> next(t_it) # another one
81
>>> next(t_it) # yet another one
17
>>> next(t_it) # this is getting boring..
111
>>> next(t_it) # and now we are done
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>> next(t_it) # and it *stays* done
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
>>> foo # but foo itself is still there
(42, 81, 17, 111)
You could make Test, the iterable, return a custom iterator class instance too (and not cop out by having a generator function create the iterator for us):
class Test:
    def __init__(self, ids):
        self.ids = ids

    def __iter__(self):
        return TestIterator(self)


class TestIterator:
    def __init__(self, test):
        self.test = test
        self.nextpos = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.test is None or self.nextpos >= len(self.test.ids):
            # we are done
            self.test = None
            raise StopIteration
        value = self.test.ids[self.nextpos]
        self.nextpos += 1
        return value
That's a lot like the original IteratorTest class above, but TestIterator keeps a reference to the Test instance. That's really how tuple_iterator works too.
A brief, final note on naming conventions here: I am sticking with using self for the first argument to methods, so the bound instance. Using different names for that argument only serves to make it harder to talk about your code with other, experienced Python developers. Don't use me, however cute or short it may seem.
(*) Unless your goal was to create an iterator of iterators, of course (which is basically what the itertools.groupby() iterator does, it is an iterator producing (object, group_iterator) tuples, but I digress).
It is unclear to me exactly what you are trying to achieve, but if you really want to use your instance attributes like this, you can convert the input to an iterator and then step through it with next(). But, as I said, this feels odd and I don't think you'd actually want a setup like this.
class Test:
    def __init__(self, ids):
        self.ids = iter(ids)

    def __iter__(self):
        return self

    def __next__(self):
        return next(self.ids)

test = Test([1,2,3])
for t in test:
    print('new value', t)
The simplest solution is to use __iter__ and return an iterator to the main list:
class Test:
    def __init__(self, ids):
        self.ids = ids

    def __iter__(self):
        return iter(self.ids)

test = Test([1,2,3])
for t in test:
    print('new value', t)
As for the update: for lazy loading you can return an iterator over a generator expression:
def __iter__(self):
    return iter(load_file(id) for id in self.ids)
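Putting that together, the whole lazy proxy could look something like this (a sketch; load_file is the same hypothetical loader as in the snippet above):

class Images:
    def __init__(self, ids):
        self.ids = ids

    def __iter__(self):
        # a fresh generator per iteration; each image is fetched only when reached
        return iter(load_file(id) for id in self.ids)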
The __next__ function is supposed to return the next value provided by an iterator. Since you have used yield in your implementation, the function returns a generator, which is what you get.
You need to make clear whether you want Test to be an iterable or an iterator. If it is an iterable, it will have the ability to provide an iterator with __iter__. If it is an iterator, it will have the ability to provide new elements with __next__. Iterators can typically work as iterables by returning themselves in __iter__. Martijn's answer shows what you probably want. However, if you want an example of how you could specifically implement __next__ (by making Test explicitly an iterator), it could be something like this:
class Test:
    def __init__(self, ids):
        self.ids = ids
        self.idx = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self.idx >= len(self.ids):
            raise StopIteration
        else:
            self.idx += 1
            return self.ids[self.idx - 1]

test = Test([1,2,3])
for t in test:
    print('new value', t)
I'm trying to write a custom wrapper class for containers. To implement the iterator protocol I provide __iter__ and __next__, and to access individual items I provide __getitem__:
#!/usr/bin/python
# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals, with_statement
from future_builtins import *

import numpy as np


class MyContainer(object):
    def __init__(self, value):
        self._position = 0
        self.value = value

    def __str__(self):
        return str(self.value)

    def __len__(self):
        return len(self.value)

    def __getitem__(self, key):
        return self.value[key]

    def __iter__(self):
        return self

    def next(self):
        if (self._position >= len(self.value)):
            raise StopIteration
        else:
            self._position += 1
            return self.value[self._position - 1]
So far, everything works as expected, e.g. when trying things like:
if __name__ == '__main__':
    a = MyContainer([1,2,3,4,5])
    print(a)
    iter(a) is iter(a)
    for i in a:
        print(i)
    print(a[2])
But I run into problems when trying to use numpy.maximum:
b= MyContainer([2,3,4,5,6])
np.maximum(a,b)
Raises "ValueError: cannot copy sequence with size 5 to array axis with dimension 0".
When commenting out the __iter__ method, I get back a NumPy array with the correct results (while no longer conforming to the iterator protocol):
print(np.maximum(a,b)) # results in [2 3 4 5 6]
And when commenting out __getitem__, I get back an instance of MyContainer
print(np.maximum(a,b)) # results in a MyContainer instance
But I lose the access to individual items.
Is there any way to achieve all three goals together (Iterator-Protocol, __getitem__ and numpy.maximum working)? Is there anything I'm doing fundamentally wrong?
To note: The actual wrapper class has more functionality but this is the minimal example where I could reproduce the behaviour.
(Python 2.7.12, NumPy 1.11.1)
Your container is its own iterator, which limits it greatly. You can only iterate each container once, after that, it is considered "empty" as far as the iteration protocol goes.
Try this with your code to see:
c = MyContainer([1,2,3])
l1 = list(c) # the list constructor will call iter on its argument, then consume the iterator
l2 = list(c) # this one will be empty, since the container has no more items to iterate on
When you don't provide an __iter__ method but do implement a __len__ method and a __getitem__ method that accepts small integer indexes, Python will use __getitem__ to iterate. Such iteration can be done multiple times, since the iterator objects that are created are all distinct from each other.
If you try the above code after taking the __iter__ method out of your class, both lists will be [1, 2, 3], as expected. You could also fix up your own __iter__ method so that it returns independent iterators. For instance, you could return an iterator from your internal sequence:
def __iter__(self):
    return iter(self.value)
Or, as suggested in a comment by Bakuriu:
def __iter__(self):
    return (self[i] for i in range(len(self)))
This latter version is essentially what Python will provide for you if you have a __getitem__ method but no __iter__ method.
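To see that fallback in action, here is a small sketch (my own example) of a class that defines no __iter__ at all but is still, repeatedly, iterable through __len__ and __getitem__:

class Squares:
    def __init__(self, n):
        self.n = n

    def __len__(self):
        return self.n

    def __getitem__(self, index):
        if not 0 <= index < self.n:
            raise IndexError(index)   # signals the end of iteration
        return index * index

print(list(Squares(4)))   # [0, 1, 4, 9]
print(list(Squares(4)))   # works again: a fresh iterator is created each time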
The main function of the class is a dictionary with words as keys and id numbers as values (note: the ids are not sequential because some of the entries have been removed):
x = {'foo':0, 'bar':1, 'king':3}
When I wrote the iterator function for a customdict class I created, it breaks with a KeyError when iterating through the range (1 to infinity).
class customdict():
    def __init__(self, dic):
        self.cdict = dic
        self.inverse = {}

    def keys(self):
        # this is necessary when I try to overload the UserDict.Mixin
        return self.cdict.values()

    def __getitem__(self, valueid):
        """ Iterator function of the inversed dictionary """
        if self.inverse == {}:
            self.inverse = {v:k for k,v in self.cdict.items()}
        return self.inverse[valueid]
x = {'foo':0, 'bar':1, 'king':3}
y = customdict(x)
for i in y:
    print i
Without using try/except or accessing len(x), how could I resolve the iteration of the dictionary within the customdict class? The reason is that x is very large, so len(x) will take too long for real-time use.
I've tried UserDict.DictMixin and suddenly it works, why is that so?:
import UserDict
class customdict(UserDict.DictMixin):
...
Is there a way to avoid using the Mixin, since in __future__ and Python 3 it looks like it's deprecated?
Define the following method.
def __iter__(self):
    for k in self.keys():
        yield k
I've tried UserDict.DictMixin and suddenly it works, why is that so?:
Because DictMixin defines the above __iter__ method for you.
(UserDict.py source code.)
Just share another way:
class customdict(dict):
    def __init__(self, dic):
        dict.__init__(self, {v:k for k,v in dic.items()})

x = {'foo':0, 'bar':1, 'king':3}
y = customdict(x)
for i in y:
    print i, y[i]
result:
0 foo
1 bar
3 king
def __iter__(self):
    return iter(self.cdict.itervalues())
In Python 3 you'd call values() instead.
You're correct that UserDict.DictMixin is out of date, but it's not the fact that it's a mixin that's the problem, it's the fact that collections.Mapping and collections.MutableMapping use a more sensible underlying interface. So if you want to update from UserDict.DictMixin, you should switch to collections.Mapping and implement __iter__() and __len__() instead of keys().
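For example, a minimal sketch of that Mapping-based approach (using collections.abc, where these ABCs live in Python 3; the inverse-lookup details are an assumption based on the question):

from collections.abc import Mapping

class InverseDict(Mapping):
    """Read-only mapping from ids back to words."""
    def __init__(self, dic):
        self._inverse = {v: k for k, v in dic.items()}

    def __getitem__(self, valueid):
        return self._inverse[valueid]

    def __iter__(self):
        return iter(self._inverse)

    def __len__(self):
        return len(self._inverse)

y = InverseDict({'foo': 0, 'bar': 1, 'king': 3})
for i in y:
    print(i, y[i])   # 0 foo, 1 bar, 3 king

With just those three methods, Mapping gives you keys(), items(), values(), get(), __contains__ and equality for free.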
In Python 3, how can I check whether an object is a container (rather than an iterator that may allow only one pass)?
Here's an example:
def renormalize(cont):
    '''
    each value from the original container is scaled by the same factor
    such that their total becomes 1.0
    '''
    total = sum(cont)
    for v in cont:
        yield v/total

list(renormalize(range(5))) # [0.0, 0.1, 0.2, 0.3, 0.4]
list(renormalize(k for k in range(5))) # [] - a bug!
Obviously, when the renormalize function receives a generator expression, it does not work as intended. It assumes it can iterate through the container multiple times, while the generator allows only one pass through it.
Ideally, I'd like to do this:
def renormalize(cont):
    if not is_container(cont):
        raise ContainerExpectedException
    # ...
How can I implement is_container?
I suppose I could check if the argument is empty right as we're starting to do the second pass through it. But this approach doesn't work for more complicated functions where it's not obvious when exactly the second pass starts. Furthermore, I'd rather put the validation at the function entrance, rather than deep inside the function (and shift it around whenever the function is modified).
I can of course rewrite the renormalize function to work correctly with a one-pass iterator. But that require copying the input data to a container. The performance impact of copying millions of large lists "just in case they are not lists" is ridiculous.
EDIT: My original example used a weighted_average function:
def weighted_average(c):
    '''
    returns weighted average of a container c
    c contains values and weights in tuples
    weights don't need to sum up to 1 (automatically renormalized)
    '''
    return sum((v * w for v, w in c)) / sum((w for v, w in c))

weighted_average([(0,1), (1,1)]) #0.5
weighted_average([(k, 1) for k in range(2)]) #0.5
weighted_average((k, 1) for k in range(2)) #mistake
But it was not the best example since the version of weighted_average rewritten to use a single pass is arguably better anyway:
def weighted_average(it):
    '''
    returns weighted average of an iterator it
    it yields values and weights in tuples
    weights don't need to sum up to 1 (automatically renormalized)
    '''
    total_value = 0
    total_weight = 0
    for v, w in it:
        total_value += v
        total_weight += w
    return total_value / total_weight
Although all iterables should subclass collections.Iterable, not all of them do, unfortunately. Here is an answer based on what interface the objects implement, instead of what they "declare".
Short answer:
A "container" as you call it, ie a list/tuple that can be iterated over more than once as opposed to being a generator that will be exhausted, will typically implement both __iter__ and __getitem__. Hence you can do this:
>>> def is_container_iterable(o):
...     return hasattr(o, '__iter__') and hasattr(o, '__getitem__')
...
>>> is_container_iterable([])
True
>>> is_container_iterable(())
True
>>> is_container_iterable({})
True
>>> is_container_iterable(range(5))
True
>>> is_container_iterable(iter([]))
False
Long answer:
However, you can make an iterable that will not be exhausted and does not support getitem. For example, a function that generates prime numbers. You could repeat the generation many times if you want, but having a function to retrieve the 1065th prime would take a lot of calculation, so you may not want to support that. :-)
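A sketch of such an iterable (my own example, not from the question):

class Primes:
    """Can be iterated over many times, is never exhausted, but supports no indexing."""
    def __iter__(self):
        candidate = 2
        while True:
            if all(candidate % p for p in range(2, int(candidate ** 0.5) + 1)):
                yield candidate
            candidate += 1

This object has __iter__ but no __getitem__, so the hasattr check above would reject it even though you can iterate over it as often as you like.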
So is there any more "reliable" way?
Well, all iterables will implement an __iter__ function that will return an iterator. The iterators will have a __next__ function. This is what is used when iterating over it. Calling __next__ repeatedly will in the end exhaust the iterator.
So if it has a __next__ function it is an iterator, and will be exhausted.
>>> def foo():
...     for x in range(5):
...         yield x
...
>>> f = foo()
>>> f.__next__
<method-wrapper '__next__' of generator object at 0xb73c02d4>
Iterables that are not yet iterators will not have a __next__ function, but will implement an __iter__ function that returns an iterator:
>>> r = range(5)
>>> r.__next__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'range' object has no attribute '__next__'
>>> ri = iter(r)
>>> ri.__next__
<method-wrapper '__next__' of range_iterator object at 0xb73bef80>
So you can check that the object has __iter__ but that it does not have __next__.
>>> def is_container_iterable(o):
...     return hasattr(o, '__iter__') and not hasattr(o, '__next__')
...
>>> is_container_iterable(())
True
>>> is_container_iterable([])
True
>>> is_container_iterable({})
True
>>> is_container_iterable(range(5))
True
>>> is_container_iterable(iter(range(5)))
False
Iterators also have an __iter__ function, which will return self.
>>> iter(f) is f
True
>>> iter(r) is r
False
>>> iter(ri) is ri
True
Hence, you can do these variations of the checking:
>>> def is_container_iterable(o):
...     return iter(o) is not o
...
>>> is_container_iterable([])
True
>>> is_container_iterable(())
True
>>> is_container_iterable({})
True
>>> is_container_iterable(range(5))
True
>>> is_container_iterable(iter([]))
False
That would fail if you implement an object that returns a broken iterator, one that does not return self when you call iter() on it again. But then your (or a third-party modules) code is actually doing things wrong.
It does depend on making an iterator though, and hence on calling the object's __iter__, which in theory may have side effects, while the above hasattr calls should not have side effects. OK, so it calls __getattribute__, which could. But you can fix that thusly:
>>> def is_container_iterable(o):
...     try:
...         object.__getattribute__(o, '__iter__')
...     except AttributeError:
...         return False
...     try:
...         object.__getattribute__(o, '__next__')
...     except AttributeError:
...         return True
...     return False
...
>>> is_container_iterable([])
True
>>> is_container_iterable(())
True
>>> is_container_iterable({})
True
>>> is_container_iterable(range(5))
True
>>> is_container_iterable(iter(range(5)))
False
This one is reasonably safe, and should work in all cases except if the object generates __next__ or __iter__ dynamically on __getattribute__ calls, but if you do that you are insane. :-)
Instinctively my preferred version would be iter(o) is o, but I haven't ever needed to do this, so that's not based on experience.
You could use the abstract base classes defined in the collections module to check and see if it is an instance of collections.Iterator.
if isinstance(it, collections.Iterator):
    # handle the iterator case
Personally though I find your iterator friendly version of weighted average far easier to read than the multiple list comprehension / sum version. :-)
The best way would be to use the abstract base class infrastructure:
def weighted_average(c):
    if not isinstance(c, collections.Sequence):
        raise ContainerExpectedException
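Applied to the renormalize function from the question, the check could look like this (a sketch; collections.abc is the Python 3 location of these ABCs, and ContainerExpectedException is the hypothetical exception from the question):

from collections.abc import Iterator

class ContainerExpectedException(TypeError):
    pass

def renormalize(cont):
    if isinstance(cont, Iterator):
        # a one-pass iterator would be silently exhausted by the first sum()
        raise ContainerExpectedException(cont)
    total = sum(cont)
    for v in cont:
        yield v / total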
I've noticed that a common pattern I use is to assign SomeClass.__init__() arguments to self attributes of the same name. Example:
class SomeClass():
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
In fact it must be a common task for others as well, as PyDev has a shortcut for it: if you place the cursor on the parameter list and press Ctrl+1, you're given the option Assign parameters to attributes, which will create that boilerplate code for you.
Is there a different, short and elegant way to perform this assignment?
You could do this, which has the virtue of simplicity:
>>> class C(object):
        def __init__(self, **kwargs):
            self.__dict__ = dict(kwargs)
This leaves it up to whatever code creates an instance of C to decide what the instance's attributes will be after construction, e.g.:
>>> c = C(a='a', b='b', c='c')
>>> c.a, c.b, c.c
('a', 'b', 'c')
If you want all C objects to have a, b, and c attributes, this approach won't be useful.
(BTW, this pattern comes from Guido his own bad self, as a general solution to the problem of defining enums in Python. Create a class like the above called Enum, and then you can write code like Colors = Enum(Red=0, Green=1, Blue=2), and henceforth use Colors.Red, Colors.Green, and Colors.Blue.)
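Spelled out, that Enum pattern is just the class above under another name (a sketch):

class Enum(object):
    def __init__(self, **kwargs):
        self.__dict__ = dict(kwargs)

Colors = Enum(Red=0, Green=1, Blue=2)
print(Colors.Red, Colors.Green, Colors.Blue)   # 0 1 2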
It's a worthwhile exercise to figure out what kinds of problems you could have if you set self.__dict__ to kwargs instead of dict(kwargs).
I sympathize with your sense that boilerplate code is a bad thing. But in this case, I'm not sure there even could be a better alternative. Let's consider the possibilities.
If you're talking about just a few variables, then a series of self.x = x lines is easy to read. In fact, I think its explicitness makes that approach preferable from a readability standpoint. And while it might be a slight pain to type, that alone isn't quite enough to justify a new language construct that might obscure what's really going on. Certainly using vars(self).update() shenanigans would be more confusing than it's worth in this case.
On the other hand, if you're passing nine, ten, or more parameters to __init__, you probably need to refactor anyway. So this concern really only applies to cases that involve passing, say, 5-8 parameters. Now I can see how eight lines of self.x = x would be annoying both to type and to read; but I'm not sure that the 5-8 parameter case is common enough or troublesome enough to justify using a different method. So I think that, while the concern you're raising is a good one in principle, in practice, there are other limiting issues that make it irrelevant.
To make this point more concrete, let's consider a function that takes an object, a dict, and a list of names, and assigns values from the dict to names from the list. This ensures that you're still being explicit about which variables are being assigned to self. (I would never suggest a solution to this problem that didn't call for an explicit enumeration of the variables to be assigned; that would be a rare-earth bug magnet):
>>> def assign_attributes(obj, localdict, names):
...     for name in names:
...         setattr(obj, name, localdict[name])
...
>>> class SomeClass():
...     def __init__(self, a, b, c):
...         assign_attributes(self, vars(), ['a', 'b', 'c'])
Now, while not horribly unattractive, this is still harder to figure out than a straightforward series of self.x = x lines. And it's also longer and more trouble to type than one, two, and maybe even three or four lines, depending on circumstances. So you only get certain payoff starting with the five-parameter case. But that's also the exact moment that you begin to approach the limit on human short-term memory capacity (= 7 +/- 2 "chunks"). So in this case, your code is already a bit challenging to read, and this would only make it more challenging.
Mod for @pcperini's answer:
>>> class SomeClass():
        def __init__(self, a, b=1, c=2):
            for name, value in vars().items():
                if name != 'self':
                    setattr(self, name, value)

>>> s = SomeClass(7, 8)
>>> print s.a, s.b, s.c
7 8 2
Your specific case could also be handled with a namedtuple:
>>> from collections import namedtuple
>>> SomeClass = namedtuple("SomeClass", "a b c")
>>> sc = SomeClass(1, "x", 200)
>>> print sc
SomeClass(a=1, b='x', c=200)
>>> print sc.a, sc.b, sc.c
1 x 200
Decorator magic!!
>>> class SomeClass():
        @ArgsToSelf
        def __init__(a, b=1, c=2, d=4, e=5):
            pass

>>> s = SomeClass(6, b=7, d=8)
>>> print s.a, s.b, s.c, s.d, s.e
6 7 2 8 5
while defining:
>>> import inspect
>>> def ArgsToSelf(f):
        def act(self, *args, **kwargs):
            arg_names, _, _, defaults = inspect.getargspec(f)
            defaults = list(defaults)
            for arg in args:
                setattr(self, arg_names.pop(0), arg)
            for arg_name, arg in kwargs.iteritems():
                setattr(self, arg_name, arg)
                defaults.pop(arg_names.index(arg_name))
                arg_names.remove(arg_name)
            for arg_name, arg in zip(arg_names, defaults):
                setattr(self, arg_name, arg)
            return f(*args, **kwargs)
        return act
Of course you could define this decorator once and use it throughout your project. Also, this decorator works on any method of an object, not only __init__.
You can do it via setattr(), like:
[setattr(self, key, value) for key, value in kwargs.items()]
It's not very beautiful, but it can save some space :)
So, you'll get:
kwargs = { 'd':1, 'e': 2, 'z': 3, }

class P():
    def __init__(self, **kwargs):
        [setattr(self, key, value) for key, value in kwargs.items()]

x = P(**kwargs)
dir(x)
['__doc__', '__init__', '__module__', 'd', 'e', 'z']
For that simple use-case I must say I like putting things explicitly (using Ctrl+1 from PyDev), but sometimes I also end up using a Bunch implementation, with a class where the accepted attributes are created from attributes pre-declared in the class, so that I know what's expected (I like it more than a namedtuple as I find it more readable, and it won't confuse static code analysis or code completion).
I've put on a recipe for it at: http://code.activestate.com/recipes/577999-bunch-class-created-from-attributes-in-class/
The basic idea is that you declare your class as a subclass of Bunch and it'll create those attributes in the instance (either from default or from values passed in the constructor):
class Point(Bunch):
    x = 0
    y = 0

p0 = Point()
assert p0.x == 0
assert p0.y == 0

p1 = Point(x=10, y=20)
assert p1.x == 10
assert p1.y == 20
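For illustration, a minimal sketch of how such a Bunch base class might be written (my own approximation; the linked recipe differs in the details):

class Bunch(object):
    def __init__(self, **kwargs):
        # collect the pre-declared class attributes as defaults,
        # then override them with whatever was passed to the constructor
        defaults = {name: value
                    for name, value in vars(type(self)).items()
                    if not name.startswith('_') and not callable(value)}
        for name in kwargs:
            if name not in defaults:
                raise AttributeError('Unexpected attribute: %r' % name)
        defaults.update(kwargs)
        for name, value in defaults.items():
            setattr(self, name, value)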
Also, Alex Martelli provided a Bunch implementation: http://code.activestate.com/recipes/52308-the-simple-but-handy-collector-of-a-bunch-of-named/ with the idea of updating the instance from the arguments, but that'll confuse static code analysis (and IMO can make things harder to follow), so I'd only use that approach for an instance that's created locally and thrown away inside that same scope without being passed anywhere else.
I solved it for myself using locals() and __dict__:
>>> class Test:
...     def __init__(self, a, b, c):
...         l = locals()
...         for key in l:
...             self.__dict__[key] = l[key]
...
...
>>> t = Test(1, 2, 3)
>>> t.a
1
>>>
Disclaimer
Do not use this: I was simply trying to create the answer closest to the OP's initial intentions. As pointed out in the comments, this relies on entirely undefined behavior, and on explicitly prohibited modifications of the symbol table.
It does work though, and has been tested under extremely basic circumstances.
Solution
class SomeClass():
    def __init__(self, a, b, c):
        vars(self).update(dict((k, v) for k, v in vars().iteritems() if (k != 'self')))

sc = SomeClass(1, 2, 3)
# sc.a == 1
# sc.b == 2
# sc.c == 3
Using the vars() built-in function, this snippet iterates through all of the variables available in the __init__ method (which should, at this point, be just self, a, b, and c) and sets self's variables equal to the same, obviously ignoring the argument-reference to self (because self.self seemed like a poor decision).
One of the problems with @user3638162's answer is that locals() contains the 'self' variable. Hence, you end up with an extra self.self. If one doesn't mind the extra self, that solution can simply be
class X:
    def __init__(self, a, b, c):
        self.__dict__.update(locals())

x = X(1, 2, 3)
print(x.a, x.__dict__)
The self can be removed after construction by del self.__dict__['self']
Alternatively, one can remove the self during construction using dictionary comprehensions introduced in Python3
class X:
    def __init__(self, a, b, c):
        self.__dict__.update(l for l in locals().items() if l[0] != 'self')

x = X(1, 2, 3)
print(x.a, x.__dict__)