Is there a way to enable using the standard type constructors such as int, set, dict, list, tuple, etc. to coerce an instance of a user-defined class to one of those types in a user-defined way? For example
class Example:
def __init__(self):
self.a=1
self.b=2
and then having
>>> ex = Example()
>>> dict(ex)
{"a":1, "b":2}
I don't know if that's possible, and if it is, what I would need to add to the class definition. Right now I implement an "as_dict" method that I call on the object, but it doesn't look as natural.
Make your type iterable by adding an __iter__() method. Trivially:
class Example:
def __init__(self):
self.a = 1
self.b = 2
def __iter__(self):
yield "a", self.a
yield "b", self.b
This yields a sequence of tuples containing name/value pairs, which dict() is happy to consume.
dict(Example()) # {'a': 1, 'b': 2}
Of course, there's a lot of repeating yourself in there. So you could instead write __iter__() to work with a predefined list of attributes:
def __iter__(self):
names = "a", "b"
for name in names:
yield name, getattr(self, name)
You could also have it introspect all the attributes of the instance, skipping private and dunder names (which dir() also reports) and omitting attributes whose values are callable:
def __iter__(self):
    for name in dir(self):
        if name.startswith("_"):
            continue  # skip names such as __dict__ and __doc__
        value = getattr(self, name)
        if not callable(value):
            yield name, value
Or have it yield from the instance's __dict__ attribute, which contains only the attributes stored directly on the instance (the dir() call above also finds inherited attributes):
def __iter__(self):
yield from self.__dict__.items()
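For instance, here's a minimal sketch of the difference (the Base class and its class attribute c are invented purely for illustration): the dir()-based version would also pick up c, while the __dict__-based version only sees what __init__ stored on the instance.
class Base:
    c = 3  # class attribute: found by dir(), but not stored in the instance __dict__

class Example(Base):
    def __init__(self):
        self.a = 1
        self.b = 2

    def __iter__(self):
        yield from self.__dict__.items()  # instance attributes only

dict(Example())  # {'a': 1, 'b': 2} -- note that 'c' is not included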
You need to make your object iterable.
Both list and tuple accept an iterable as their argument, which they consume until it is exhausted to construct the new collection. To enable your class to work with this mechanism, you need to define at least the __iter__ method, and possibly also the __next__ method, depending on the specific semantics of your class.
Your implementation of __iter__ needs to return an object that implements the iterator protocol. If you internally store an iterable collection, this can be as simple as returning an iterator over that collection with iter().
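As a rough sketch of that simplest case (the Bag class and its _items attribute are invented for illustration):
class Bag:
    def __init__(self):
        self._items = [1, 2, 3]   # the internally stored iterable collection

    def __iter__(self):
        return iter(self._items)  # iter() hands back a ready-made iterator

list(Bag())   # [1, 2, 3]
tuple(Bag())  # (1, 2, 3)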
In your case, it seems you want your object to behave like an iterable collection of tuples. A possible implementation for the behavior you're looking for would be
class Example:
def __init__(self):
self.a=1
self.b=2
def __iter__(self):
for elem in ('a', 'b'):
yield (elem, getattr(self, elem))
>>> dict(Example())
{'a': 1, 'b': 2}
Here, we make use of a generator to produce an iterable that will yield the tuples ('a', self.a) and ('b', self.b) in turn.
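Because the class merely needs to be iterable, the same definition also works with the other collection constructors:
>>> list(Example())
[('a', 1), ('b', 2)]
>>> tuple(Example())
(('a', 1), ('b', 2))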
class Main(object):
    def __init__(self, config):
        self.attributes = config

    def return_new_copy(self, additional_attributes):
        additional_attributes.update(self.attributes)
        return Main(additional_attributes)
I want to update the instance attributes and return a new instance of the same class. I guess I am trying to find out if the above code is Pythonic or if it's a dirty approach. I can't use a classmethod for several reasons not mentioned here. Is there another recommended approach?
Your return_new_copy modifies the parameter passed in, which is probably undesirable. It also merges in the wrong direction, giving precedence to self.attributes.
I'd write it as follows:
def return_new_copy(self, additional_attributes):
# python<3.5 if there are only string keys:
# attributes = dict(self.attributes, **additional_attributes)
# python<3.5 if there are non-string keys:
# attributes = self.attributes.copy()
# attributes.update(additional_attributes)
# python3.5+
attributes = {**self.attributes, **additional_attributes}
return type(self)(attributes)
A few subtleties:
- I make sure to copy both the input attributes and the self attributes
- I merge the additional attributes on top of the self attributes
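For example, a quick usage sketch (the config values are made up, and it assumes Main stores the passed-in dict as self.attributes, as in the question):
>>> m = Main({'host': 'localhost', 'port': 80})
>>> m2 = m.return_new_copy({'port': 8080})
>>> m2.attributes
{'host': 'localhost', 'port': 8080}
>>> m.attributes
{'host': 'localhost', 'port': 80}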
If you're looking for something that does this automatically, you might want to check out namedtuple.
For example:
>>> import collections
>>> C = collections.namedtuple('C', ('a', 'b'))
>>> x = C(1, 2)
>>> x
C(a=1, b=2)
>>> y = x._replace(b=3)
>>> y
C(a=1, b=3)
>>> x
C(a=1, b=2)
Take this super simple class:
class Foo():
def __init__(self, iden):
self.iden = iden
def __hash__(self):
return hash(self.iden)
def __repr__(self):
return str(self.iden)
The goal is to create instances of the class to use as dict keys. If __repr__ is omitted, the keys print as the default object representation (class name and address). With __repr__, a printable representation might be:
f = Foo(1)
g = Foo(2)
d = {f:'a', g:'b'}
print(d)
>>> {1:'a', 2:'b'}
When attempting to access the dict by key, though, it is not immediately obvious how to use the __repr__ (or __str__, for that matter) representation as the key.
print(d[1])
>>> KeyError
First things first: __repr__() is a red herring. It only affects how the object is displayed; it has nothing to do with what you're trying to do.
If you want two separate objects to refer to the same slot in a dict, you need two things:
The objects must have the same hash (hash(obj1) == hash(obj2)).
The objects must compare equal (obj1 == obj2).
Your above implementation does the former, but not the latter. You need to add an __eq__() method (which is actually required by the documentation when you define __hash__(), anyway).
class Foo():
def __init__(self, iden):
self.iden = iden
def __hash__(self):
return hash(self.iden)
def __eq__(self, other):
return self.iden == other
>>> d = {Foo(1) : 'a'}
>>> d[1]
'a'
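And since two Foo instances with the same iden also compare equal, an equivalent Foo key finds the same entry:
>>> d[Foo(1)]
'a'
>>> Foo(1) == 1
True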
Subclassing frozenset and set doesn't seem to work the same when it comes to iterables. Try to run the following MWE:
class MonFrozenSet(frozenset):
def __new__(self, data):
super(MonFrozenSet,self).__init__(data)
return self
class MonSet(set):
def __init__(self, data):
super(MonSet,self).__init__(data)
x=(1,2,3,4)
A=MonSet(x)
B=MonFrozenSet(x)
for y in A: #Works
print y
for y in B: #Doesn't work
print y
The second for loop raises:
for y in B:
TypeError: 'type' object is not iterable
Any idea on how I can solve this?
If you are asking yourself why I would like to use frozenset, the answer is that I am trying to create a set of sets of tuples. The sets of tuples will be frozensets, and the set of sets of tuples will be a set.
I use Python 2.7.
When overriding __new__ you need to call the superclass's __new__, not its __init__. You also need to pass the first parameter along (better named cls, since __new__ receives the class rather than an instance). Finally, you need to return the result, since __new__ actually creates the object; it doesn't modify self. So:
class MonFrozenSet(frozenset):
def __new__(cls, data):
return super(MonFrozenSet,cls).__new__(cls, data)
Then:
>>> a = MonFrozenSet([1, 2, 3])
>>> for item in a:
... print item
1
2
3
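And for the stated goal of a set of frozensets of tuples, the fixed subclass is hashable, so it nests inside a plain set just fine; here's a brief sketch with made-up tuples:
>>> inner1 = MonFrozenSet([(1, 2), (3, 4)])
>>> inner2 = MonFrozenSet([(5, 6)])
>>> outer = set([inner1, inner2])
>>> len(outer)
2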
I have a class in Python which has an iterable as an instance variable. I want to iterate over instances of the class by iterating over the embedded iterable.
I implemented this as follows:
def __iter__(self):
return self._iterable.__iter__()
I don't really feel comfortable calling the __iter__() method on the iterable, as it is a special method. Is this how you would solve this problem in Python, or is there a more elegant solution?
The "best" way to delegate __iter__ would be:
def __iter__(self):
return iter(self._iterable)
Alternatively, it might be worth knowing about:
def __iter__(self):
for item in self._iterable:
yield item
This will let you fiddle with each item before returning it (e.g., if you wanted to yield item * 2).
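As a small illustrative sketch (the Doubler class and its contents are invented for this example):
class Doubler(object):
    def __init__(self, iterable):
        self._iterable = iterable

    def __iter__(self):
        for item in self._iterable:
            yield item * 2  # transform each item before handing it out

list(Doubler([1, 2, 3]))  # [2, 4, 6]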
And as @Lattyware mentions in the comments, PEP 380 (slated for inclusion in Python 3.3) will allow:
def __iter__(self):
yield from self._iterable
Note that it may be tempting to do something like:
def __init__(self, iterable):
self.__iter__ = iterable.__iter__
But this won't work: iter(foo) calls the __iter__ method on type(foo) directly, bypassing foo.__iter__. Consider, for example:
class SurprisingIter(object):
    def __init__(self):
        self.__iter__ = lambda: iter("abc")

    def __iter__(self):
        return iter([1, 2, 3])
You would expect that list(SurprisingIter()) would return ["a", "b", "c"], but it actually returns [1, 2, 3].
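A quick check confirms that the __iter__ defined on the class wins:
>>> list(SurprisingIter())
[1, 2, 3]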