Object directing to a property when accessed as an iterable - python

I'm trying to figure out whether there's an elegant and concise way to have a class access one of its own properties when it's used as a dictionary; basically, redirecting all the methods an ordered dictionary would implement to one of the class's properties.
Currently I'm inheriting from IterableUserDict and explicitly setting its data attribute to another property, and it seems to be working, but I know that UserDict is considered somewhat old, and I'm concerned I might be overlooking something.
What I have:
from UserDict import IterableUserDict  # Python 2; odict swapped for the stdlib OrderedDict
from collections import OrderedDict

class ConnectionInterface(IterableUserDict):
    def __init__(self, hostObject):
        self._hostObject = hostObject
        self.ports = OrderedDict()
        self.inputPorts = OrderedDict()
        self.outputPorts = OrderedDict()
        # IterableUserDict keeps its contents in self.data; pointing it
        # at self.ports makes the object act as that dictionary
        self.data = self.ports
This way I expect the object to behave and respond the way I mean it to, and I get freebie ordered-dictionary behaviour on its ports property when the object is iterated, items are fetched by key, membership is tested with if something in myObject, and so on.
Any advice is welcome. The above seems to be working fine, but I have an odd itch that I might be missing something.
Thanks in advance.

In the end, inheriting from IterableUserDict and setting self.data explicitly worked out to what I needed, and it hasn't had any unforeseen consequences or added dodginess when serialising and deserialising.
So I'm sticking to my original solution, and I can recommend it if anybody needs simple, full-fledged dict-like behaviour on a selected subset of data in their own objects.
It's a fairly simple use case, though, without particularly strict scalability or complexity requirements stressing it.

Sure, you can do this. The primary thing with dictionaries is item access, so you can implement the magic methods __getitem__ and __setitem__, something like this:

def __getitem__(self, key):
    return self.ports[key]

def __setitem__(self, key, value):
    self.ports[key] = value
If you want implementations for .keys() and .values() and stuff, just write them in this style:

def keys(self):
    return self.ports.keys()
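Note that IterableUserDict only exists on Python 2. On Python 3 you can get the same delegation from collections.abc.MutableMapping; here's a minimal sketch of that, assuming the same ports attribute and eliding the input/output port details:

from collections import OrderedDict
from collections.abc import MutableMapping

class ConnectionInterface(MutableMapping):
    def __init__(self, hostObject):
        self._hostObject = hostObject
        self.ports = OrderedDict()

    # The five abstract methods delegate to self.ports; MutableMapping
    # derives keys(), items(), get(), update(), pop(), "in", etc. from them.
    def __getitem__(self, key):
        return self.ports[key]

    def __setitem__(self, key, value):
        self.ports[key] = value

    def __delitem__(self, key):
        del self.ports[key]

    def __iter__(self):
        return iter(self.ports)

    def __len__(self):
        return len(self.ports)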

Related

Appropriately altering __setattr__ method for mapped objects

So, I have a web application where a certain database object has an attribute I would like to cache in a redis store. This is relatively simple to do manually, with something like the below:

db_object.update({<attribute>: <value>})
redis.set(db_object.id, <value>)

The issue is that this attribute is changed in many places throughout the codebase. That doesn't mean this approach won't work; it just makes for very repetitive code. I would much rather have a wrapper for the cache that I can access directly whenever I need to. That means that any time I change the particular attribute I'm interested in, I would like my redis store to be updated, theoretically like so:
def __setattr__(self, name, value):
    self.__dict__[name] = value
    if name == <attribute>:
        redis.set(self.id, value)

which would solve all my problems. The only issue is that, as detailed here, I cannot directly modify the __dict__ of mapped objects. How can I achieve the same effect?
Found a nice way to approach this on another question: just call the super method instead of directly altering __dict__.
The method below has the desired effect:

def __setattr__(self, name, value):
    if name == <attribute>:
        redis.set(self.id, value)
    super(Dataset, self).__setattr__(name, value)
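For illustration, here is a self-contained sketch of that pattern. The Dataset name comes from the snippet above; the FakeCache class and the tracked_attr attribute are made up for the sketch, standing in for the redis client and the cached attribute:

class FakeCache:
    """Stands in for the redis client in this sketch."""
    def __init__(self):
        self.store = {}

    def set(self, key, value):
        self.store[key] = value

cache = FakeCache()

class Dataset:
    def __init__(self, id_):
        # Assign id via object.__setattr__ so the hook below
        # can safely read self.id afterwards.
        super().__setattr__('id', id_)

    def __setattr__(self, name, value):
        if name == 'tracked_attr':
            cache.set(self.id, value)  # mirror the attribute into the cache
        super().__setattr__(name, value)

ds = Dataset(1)
ds.tracked_attr = 'hello'
assert ds.tracked_attr == 'hello'
assert cache.store[1] == 'hello'   # the cache saw the change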

How dangerous is setting self.__class__ to something else?

Say I have a class, which has a number of subclasses.
I can instantiate the class. I can then set its __class__ attribute to one of the subclasses. I have effectively changed the class type to the type of its subclass, on a live object. I can call methods on it which invoke the subclass's version of those methods.
So, how dangerous is doing this? It seems weird, but is it wrong to do such a thing? Despite the ability to change type at run-time, is this a feature of the language that should completely be avoided? Why or why not?
(Depending on responses, I'll post a more-specific question about what I would like to do, and if there are better alternatives).
Here's a list of things I can think of that make this dangerous, in rough order from worst to least bad:
It's likely to be confusing to someone reading or debugging your code.
You won't have gotten the right __init__ method, so you probably won't have all of the instance variables initialized properly (or even at all).
The differences between 2.x and 3.x are significant enough that it may be painful to port.
There are some edge cases with classmethods, hand-coded descriptors, hooks to the method resolution order, etc., and they're different between classic and new-style classes (and, again, between 2.x and 3.x).
If you use __slots__, all of the classes must have identical slots. (And if you have compatible but different slots, it may appear to work at first but do horrible things…)
Special method definitions in new-style classes may not change. (In fact, this will work in practice with all current Python implementations, but it's not documented to work, so…)
If you use __new__, things will not work the way you naively expected.
If the classes have different metaclasses, things will get even more confusing.
Meanwhile, in many cases where you'd think this is necessary, there are better options:
Use a factory to create an instance of the appropriate class dynamically, instead of creating a base instance and then munging it into a derived one.
Use __new__ or other mechanisms to hook the construction.
Redesign things so you have a single class with some data-driven behavior, instead of abusing inheritance.
As the most common specific case of the last one, just put all of the "variable methods" into classes whose instances are kept as a data member of the "parent", rather than into subclasses. Instead of changing self.__class__ = OtherSubclass, just do self.member = OtherSubclass(self). If you really need methods to magically change, automatic forwarding (e.g., via __getattr__) is a much more common and Pythonic idiom than changing classes on the fly.
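A tiny sketch of that forwarding idiom (all names invented for illustration, not from the answer above):

class Quacker:
    def speak(self):
        return "quack"

class Woofer:
    def speak(self):
        return "woof"

class Animal:
    def __init__(self):
        self.behavior = Quacker()   # the "variable methods" live here

    def become_dog(self):
        self.behavior = Woofer()    # swap a member, not __class__

    def __getattr__(self, name):
        # forward unknown attribute lookups to the behavior object
        return getattr(self.behavior, name)

a = Animal()
assert a.speak() == "quack"
a.become_dog()
assert a.speak() == "woof"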
Assigning the __class__ attribute is useful if you have a long-running application and you need to replace an old version of some object with a newer version of the same class without loss of data, e.g. after some reload(mymodule) without reloading unchanged modules. Another example is if you implement persistence, something similar to pickle.load.
All other usage is discouraged, especially if you can write the complete code before starting the application.
On arbitrary classes, this is extremely unlikely to work, and is very fragile even if it does. It's basically the same thing as pulling the underlying function objects out of the methods of one class, and calling them on objects which are not instances of the original class. Whether or not that will work depends on internal implementation details, and is a form of very tight coupling.
That said, changing the __class__ of objects amongst a set of classes that were particularly designed to be used this way could be perfectly fine. I've been aware that you can do this for a long time, but I've never yet found a use for this technique where a better solution didn't spring to mind at the same time. So if you think you have a use case, go for it. Just be clear in your comments/documentation what is going on. In particular it means that the implementation of all the classes involved have to respect all of their invariants/assumptions/etc, rather than being able to consider each class in isolation, so you'd want to make sure that anyone who works on any of the code involved is aware of this!
Well, not discounting the problems cautioned about at the start, it can be useful in certain cases.
First of all, the reason I'm looking this post up is that I did just this, and __slots__ doesn't like it (yes, my code is a valid use case for slots; this is pure memory optimization), so I was trying to get around the __slots__ issue.
I first saw this in Alex Martelli's Python Cookbook (1st ed). In the 3rd ed, it's recipe 8.19, "Implementing Stateful Objects or State Machine Problems". A fairly knowledgeable source, Python-wise.
Suppose you have an ActiveEnemy object that has different behavior from an InactiveEnemy and you need to switch back and forth quickly between them. Maybe even a DeadEnemy.
If InactiveEnemy were a subclass or a sibling, you could switch the class attribute. More exactly, the exact ancestry matters less than the methods and attributes being consistent for the code calling them. Think of a Java interface or, as several people have mentioned: your classes need to be designed with this use in mind.
Now, you still have to manage state transition rules and all sorts of other things. And, yes, if your client code is not expecting this behavior and your instances switch behavior, things will hit the fan.
But I've used this quite successfully on Python 2.x and never had any unusual problems with it. Best done with a common parent and small behavioral differences on subclasses with the same method signatures.
No problems, until my __slots__ issue that's blocking it just now. But slots are a pain in the neck in general.
I would not do this to patch live code, and I would also favor using a factory method to create instances.
But to manage very specific conditions known in advance? Like a state machine that the clients are expected to understand thoroughly? Then it is pretty darn close to magic, with all the risk that comes with it. It's quite elegant.
Python 3 concerns? Test it to see if it works, but the Cookbook uses Python 3 print(x) syntax in its example, FWIW.
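To make that concrete, here is a minimal sketch of the enemy state machine described above (class and method names are illustrative, not taken from the Cookbook recipe). The subclasses share a common parent, identical construction, and the same method signatures, so reassigning __class__ swaps behavior and nothing else:

class Enemy:
    def __init__(self, health=10):
        self.health = health

class ActiveEnemy(Enemy):
    def act(self):
        return "attacks!"

    def hit(self, damage):
        self.health -= damage
        if self.health <= 0:
            # Same object, new behavior: instance data survives the swap.
            self.__class__ = DeadEnemy

class DeadEnemy(Enemy):
    def act(self):
        return "lies still."

e = ActiveEnemy(health=5)
assert e.act() == "attacks!"
e.hit(5)
assert isinstance(e, DeadEnemy) and e.act() == "lies still."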
The other answers have done a good job of discussing the question of why just changing __class__ is likely not an optimal decision.
Below is one example of a way to avoid changing __class__ after instance creation, using __new__. I'm not recommending it, just showing how it could be done, for the sake of completeness. However it is probably best to do this using a boring old factory rather than shoe-horning inheritance into a job for which it was not intended.
class ChildDispatcher:
    _subclasses = dict()

    def __new__(cls, *args, dispatch_arg, **kwargs):
        # dispatch to a registered child class
        subcls = cls.getsubcls(dispatch_arg)
        return super(ChildDispatcher, subcls).__new__(subcls)

    def __init_subclass__(subcls, **kwargs):
        super(ChildDispatcher, subcls).__init_subclass__(**kwargs)

        # add a __new__ constructor to the child class whose default
        # dispatch argument is the child's own qualified name
        def __new__(cls, *args, dispatch_arg=subcls.__qualname__, **kwargs):
            return super(ChildDispatcher, cls).__new__(cls, *args, **kwargs)
        subcls.__new__ = __new__

        ChildDispatcher.register_subclass(subcls)

    @classmethod
    def getsubcls(cls, key):
        name = cls.__qualname__
        if cls is not ChildDispatcher:
            raise AttributeError(f"type object {name!r} has no attribute 'getsubcls'")
        try:
            return ChildDispatcher._subclasses[key]
        except KeyError:
            raise KeyError(f"No child class key {key!r} in the "
                           f"{cls.__qualname__} subclasses registry")

    @classmethod
    def register_subclass(cls, subcls):
        name = subcls.__qualname__
        if cls is not ChildDispatcher:
            raise AttributeError(f"type object {name!r} has no attribute "
                                 f"'register_subclass'")
        if name not in ChildDispatcher._subclasses:
            ChildDispatcher._subclasses[name] = subcls
        else:
            raise KeyError(f"{name} subclass already exists")

class Child(ChildDispatcher): pass

c1 = ChildDispatcher(dispatch_arg="Child")
assert isinstance(c1, Child)

c2 = Child()
assert isinstance(c2, Child)
How "dangerous" it is depends primarily on what the subclass would have done when initializing the object. It's entirely possible that it would not be properly initialized, having only run the base class's __init__(), and something would fail later because of, say, an uninitialized instance attribute.
Even without that, it seems like bad practice for most use cases. Easier to just instantiate the desired class in the first place.
Here's an example of one way you could do the same thing without changing __class__. Quoting @unutbu in the comments to the question:
Suppose you were modeling cellular automata, where each cell could be in one of, say, 5 stages. You could define 5 classes Stage1, Stage2, etc. Suppose each Stage class has multiple methods.
class Stage1(object):
    ...

class Stage2(object):
    ...

class Cell(object):
    def __init__(self):
        self.current_stage = Stage1()

    def goToStage2(self):
        self.current_stage = Stage2()

    def __getattr__(self, attr):
        return getattr(self.current_stage, attr)
If you allow changing __class__ you could instantly give a cell all the methods of a new stage (same names, but different behavior).
The same goes for changing current_stage, but that is a perfectly normal and Pythonic thing to do, one that won't confuse anyone.
Plus, it allows you to not change certain special methods you don't want changed, just by overriding them in Cell.
Plus, it works for data members, class methods, static methods, etc., in ways every intermediate Python programmer already understands.
If you refuse to change __class__, then you might have to include a stage attribute and use a lot of if statements, or reassign a lot of attributes pointing to different stages' functions.
Yes, I've used a stage attribute, but that's not a downside: it's the obvious, visible way to keep track of what the current stage is, which is better for debugging and for readability.
And there's not a single if statement, nor any attribute reassignment except for the stage attribute itself.
And this is just one of multiple different ways of doing this without changing __class__.
In the comments I proposed modeling cellular automata as a possible use case for a dynamic __class__. Let's try to flesh out the idea a bit:
Using dynamic __class__:
class Stage(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Stage1(Stage):
    def step(self):
        if ...:
            self.__class__ = Stage2

class Stage2(Stage):
    def step(self):
        if ...:
            self.__class__ = Stage3

cells = [Stage1(x, y) for x in range(rows) for y in range(cols)]

def step(cells):
    for cell in cells:
        cell.step()
    yield cells
For lack of a better term, I'm going to call this
The traditional way: (mainly abarnert's code)
class Stage1(object):
    def step(self, cell):
        ...
        if ...:
            cell.goToStage2()

class Stage2(object):
    def step(self, cell):
        ...
        if ...:
            cell.goToStage3()

class Cell(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.current_stage = Stage1()

    def goToStage2(self):
        self.current_stage = Stage2()

    def __getattr__(self, attr):
        return getattr(self.current_stage, attr)

cells = [Cell(x, y) for x in range(rows) for y in range(cols)]

def step(cells):
    for cell in cells:
        cell.step(cell)
    yield cells
Comparison:
The traditional way creates a list of Cell instances, each with a current_stage attribute.
The dynamic __class__ way creates a list of instances which are subclasses of Stage. There is no need for a current_stage attribute, since __class__ already serves this purpose.
The traditional way uses goToStage2, goToStage3, ... methods to switch stages.
The dynamic __class__ way requires no such methods. You just reassign __class__.
The traditional way uses the special method __getattr__ to delegate some method calls to the appropriate stage instance held in the self.current_stage attribute.
The dynamic __class__ way does not require any such delegation. The instances in cells are already the objects you want.
The traditional way needs to pass the cell as an argument to Stage.step, so that cell.goToStageN can be called.
The dynamic __class__ way does not need to pass anything. The object we are dealing with has everything we need.
Conclusion:
Both ways can be made to work. To the extent that I can envision how these two implementations would pan out, it seems to me the dynamic __class__ implementation will be:
simpler (no Cell class),
more elegant (no ugly goToStage2 methods, no brain-teasers like why you need to write cell.step(cell) instead of cell.step()),
and easier to understand (no __getattr__, no additional level of indirection).

Python idiom for dict-able classes?

I want to do something like this:
class Dictable:
    def dict(self):
        raise NotImplementedError

class Foo(Dictable):
    def dict(self):
        return {'bar1': self.bar1, 'bar2': self.bar2}
Is there a more pythonic way to do this? For example, is it possible to overload the built-in conversion dict(...)? Note that I don't necessarily want to return all the member variables of Foo, I'd rather have each class decide what to return.
Thanks.
The Pythonic way depends on what you want to do. If your objects shouldn't be regarded as mappings in their own right, then a dict method is perfectly fine, but you shouldn't "overload" dict to handle dictables. Whether or not you need the base class depends on whether you want to do isinstance(x, Dictable); note that hasattr(x, "dict") would serve pretty much the same purpose.
If the classes are conceptually mappings of keys to values, then implementing the Mapping protocol seems appropriate. I.e., you'd implement
__getitem__
__iter__
__len__
and inherit from collections.abc.Mapping (just collections.Mapping on older Pythons) to get the other methods. Then you get dict(Foo()) for free. Example:
from collections.abc import Mapping

class Foo(Mapping):
    def __getitem__(self, key):
        if key not in ("bar1", "bar2"):
            raise KeyError("{!r} not found".format(key))
        return getattr(self, key)

    def __iter__(self):
        yield "bar1"
        yield "bar2"

    def __len__(self):
        return 2
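And a quick usage check (assuming bar1 and bar2 have been set on the instance):

f = Foo()
f.bar1, f.bar2 = 1, 2
assert dict(f) == {"bar1": 1, "bar2": 2}  # the free dict() conversion
assert "bar1" in f and len(f) == 2        # the Mapping mixins work too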
Firstly, look at collections.abc, which contains the Python abstract base classes (the equivalent of interfaces in static languages).
Then, decide if you want to write your own ABC or make use of an existing one; in this case, Mapping might be what you want.
Note that although the dict constructor (i.e. dict(my_object)) is not overridable, if it encounters an iterable object that yields a sequence of key-value pairs, it will construct a dict from that; i.e. (Python 2; for Python 3 replace iteritems with items):

def __iter__(self):
    return {'bar1': self.bar1, 'bar2': self.bar2}.iteritems()

However, if your classes are intended to behave like a dict you shouldn't do this, as it differs from the expected behaviour of a Mapping instance, which is to iterate over keys, not key-value pairs. In particular it would cause for ... in to behave incorrectly.
Most of the answers here are about making your class behave like a dict, which isn't actually what you asked. If you want to express the idea, "I am a class that can be turned into a dict," I would simply define a bunch of classes and have them each implement .dict(). Python favors duck-typing (what an object can do) over what an object is. The ABC doesn't add much. Documentation serves the same purpose.
You can certainly overload dict(), but you almost never want to! Too many aspects of the standard library depend upon dict being available, and you would break most of its functionality. You can probably do something like this, though:

class Dictable:
    def dict(self):
        return self.__dict__

Accessing private variables when there's a getter/setter for them

I have a question about the righteous way of programming in Python... Maybe there can be several different opinions, but here it goes:
Let's say I have a class with a couple of private attributes, and that I have implemented two getters/setters (not overloading __getattr__ and __setattr__, but in a more "Java-tistic" style):
class MyClass:
    def __init__(self):
        self.__private1 = "Whatever1"

    def setPrivate1(self, private1):
        if isinstance(private1, str) and private1.startswith("private"):
            self.__private1 = private1
        else:
            raise AttributeError("Kaputt")

    def getPrivate1(self):
        return self.__private1
Now let's say that a few lines below, in another method of the same class, I need to re-set the value of __private1. Since it's the same class, I still have direct access to the private attribute self.__private1.
My question is: should I use

self.setPrivate1("privateBlaBlaBla")

or should I access it directly, as in

self.__private1 = "privateBlaBlaBla"

Since I am the one setting the new value, I know that said value ("privateBlaBlaBla") is correct (a str that starts with "private"), so it is not going to leave the system inconsistent. On the other hand, if another programmer takes my code and needs to change the behaviour of the __private1 attribute, he will need to go through all the code and check whether the value of __private1 has been set manually somewhere else.
My guess is that the right thing to do is to always use the setPrivate1 method and only access the __private1 variable directly in the getter/setter, but I'd like to know the opinion of more experienced Python programmers.
You can't present a classic example of bad Python and then expect people not to have opinions on what to do about it. Don't use getters and setters; use properties:
class MyClass:
    def __init__(self):
        self._private1 = "Whatever1"

    @property
    def private1(self):
        return self._private1

    @private1.setter
    def private1(self, value):
        self._private1 = value
A side comment -- using double-underscore names can be confusing, because Python actually mangles the name to stop you accessing it from outside the class. This provides no real security, but causes no end of headaches. The easiest way to avoid the headaches is to use single-underscore names, which is basically a universal convention for private. (Ish.)
If you want an opinion -- use properties =). If you want an opinion on your JavaPython monstrosity -- I would use the setter; after all, you've written it, and that's what it's there for! There's no obvious benefit to setting the variable by hand, but there are several drawbacks.
Neither. In Python, use properties, not getters and setters.
class MyClass:
    def __init__(self):
        self._private1 = "Whatever1"

    @property
    def private1(self):
        return self._private1

    @private1.setter
    def private1(self, private1):
        if isinstance(private1, str) and private1.startswith("private"):
            self._private1 = private1
        else:
            raise AttributeError("Kaputt")
Then later on in your code, set the _private1 attribute through the property:

self.private1 = "privateBlaBlaBla"
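For example, a quick sketch of how the property above behaves (the failing assignment is included to show the validation firing):

obj = MyClass()
obj.private1 = "privateBlaBlaBla"  # goes through the setter; validation passes
print(obj.private1)                # -> privateBlaBlaBla
obj.private1 = "nope"              # raises AttributeError: Kaputt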

Intercepting changes of attributes in classes within a class - Python

I have been messing around with pygame and Python, and I want to be able to call a function when an attribute of my class has changed. My current solution being:
class ExampleClass(parentClass):
    def __init__(self):
        self.rect = pygame.rect.Rect(0, 0, 100, 100)

    def __setattr__(self, name, value):
        parentClass.__setattr__(self, name, value)
        dofancystuff()

Firstclass = ExampleClass()
This works fine, and dofancystuff is called when I change the rect value with Firstclass.rect = pygame.rect.Rect(0, 0, 100, 100). However, if I say Firstclass.rect.bottom = 3, __setattr__, and therefore dofancystuff, is not called.
So my question, I guess, is: how can I intercept any change to an attribute of one of my class's attributes?
edit: Also, if I am going about this the wrong way, please do tell; I'm not very knowledgeable when it comes to Python.
Well, the simple answer is: you can't. In the case of Firstclass.rect = <...>, your __setattr__ is called. But in the case of Firstclass.rect.bottom = 3, the __setattr__ method of the Rect instance is called. The only solution I see is to create a derived class of pygame.rect.Rect in which you override the __setattr__ method. You can also monkey-patch the Rect class, but that is discouraged for good reasons.
You could try __getattr__, which should be called on Firstclass.rect.
But do this instead: create a separate class (a subclass of pygame.rect.Rect?) for ExampleClass.rect, and implement __setattr__ in that class. Now you will be told about anything that gets set on the rect member of ExampleClass. You can still implement __setattr__ in ExampleClass (and should); only now make sure you instantiate your own rect class for that member...
BTW: Don't call your objects Firstclass, as then it looks like a class as opposed to an object...
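A sketch of the separate-class approach suggested above, with a plain stand-in instead of pygame.rect.Rect (whether a C-implemented type like pygame's Rect cooperates with a __setattr__ override is worth verifying against pygame itself):

def dofancystuff():
    print("rect changed")

class Rect:
    """Plain stand-in for pygame.rect.Rect in this sketch."""
    def __init__(self, left, top, width, height):
        self.left, self.top = left, top
        self.width, self.height = width, height

class NotifyingRect(Rect):
    """Fires the hook on every attribute assignment, including nested
    ones like rect.bottom = 3 that never touch the owner's __setattr__."""
    def __setattr__(self, name, value):
        super().__setattr__(name, value)
        dofancystuff()

rect = NotifyingRect(0, 0, 100, 100)  # fires once per attribute set in __init__
rect.bottom = 3                       # fires again: rect changed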
This isn't answering the question but it is relevant:
self.__dict__[name] = value
is probably better than
parentClass.__setattr__(self, name, value)
This is recommended by the Python documentation (http://docs.python.org/2/reference/datamodel.html?highlight=setattr#object.__setattr__) and makes more sense anyway in the general case, since it does not assume anything about the behaviour of parentClass's __setattr__.
Yay for unsolicited advice four years too late!
I think the reason why you have this difficulty deserves a little more information than is provided by the other answers.
The problem is, when you do:
myObject.attribute.thing = value
You're not assigning a value to attribute. The code is equivalent to this:
anAttribute = myObject.attribute
anAttribute.thing = value
As seen by myObject, all you're doing is getting the attribute; you're not setting the attribute.
Making subclasses of your attributes that you control, and can define __setattr__ for is one solution.
An alternative solution, that may make sense if you have lots of attributes of different types and don't want to make lots of individual subclasses for all of them, is to override __getattribute__ or __getattr__ to return a facade to the attribute that performs the relevant operations in its __setattr__ method. I've not attempted to do this myself, but I imagine that you should be able to make a simple facade class that will act as a facade for any object.
Care would need to be taken in the choice between __getattribute__ and __getattr__. See the documentation linked in the previous sentence for details, but basically: if __getattr__ is used, the actual attributes will have to be encapsulated/obfuscated somehow so that __getattr__ handles requests for them, and if __getattribute__ is used, it will have to retrieve attributes via calls to a base class.
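As a proof of concept for the facade idea (untested against pygame; the wrapper and callback names are made up for this sketch):

class ChangeNotifier:
    """Facade that wraps any object and invokes a callback whenever
    one of the wrapped object's attributes is assigned."""
    def __init__(self, wrapped, callback):
        # Bypass our own __setattr__ when storing internals.
        object.__setattr__(self, '_wrapped', wrapped)
        object.__setattr__(self, '_callback', callback)

    def __getattr__(self, name):
        # Only called when normal lookup fails, i.e. for everything
        # except _wrapped and _callback; reads pass straight through.
        return getattr(self._wrapped, name)

    def __setattr__(self, name, value):
        setattr(self._wrapped, name, value)
        self._callback(name, value)

class Point:  # stand-in for an arbitrary attribute object
    def __init__(self):
        self.bottom = 0

p = ChangeNotifier(Point(), lambda name, value: print("set", name, "=", value))
p.bottom = 3       # prints: set bottom = 3
print(p.bottom)    # reads pass through: 3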
If all you're trying to do is determine if some rects have been updated, then this is overkill.
