How can a class inherit from another class after an instance has been created? - Python

I am wondering whether one class can inherit from another class after an instance has been initialized. Below I have two classes, Child and Outfit, and I create an instance of Child. I would then like that instance to inherit from Outfit, so I can call get_cloth and have it return 'blue'. For various reasons, I do not want Child itself to inherit from Outfit.
class Outfit(object):
    def __init__(self):
        self.cloth = 'blue' if self.gender == 'M' else 'red'

    def get_cloth(self):
        return self.cloth

class Child:
    def __init__(self, name, gender):
        self.name = name
        self.gender = gender

child = Child('tom', 'M')
print(child.get_cloth())

It doesn't make sense to do what you're trying to do. Objects don't inherit, only types do. The instance child has a type that can inherit (be a subtype of) another type, but that type is Child, and you said that you don't want Child to inherit from Outfit.
This isn't something that necessarily has to be true for any object-oriented programming language; rather, it's the fundamental difference between class-based OO (Python, Ruby, Smalltalk, C++, Java, etc.) and prototype-based OO (JavaScript, Self, IO, and a few other languages you've never heard of).
There are some things you can do that are somewhat close to what you're asking for, but they're all things you very rarely want to do.
If you're interested in subtyping—that is, making isinstance(child, Outfit) true:
You can change Child to inherit from Outfit after it's been created (e.g., by assigning to its __bases__ attribute). You can even do this after the child instance has been created. Then child will be an instance of something that subtypes Outfit.
You can change the type of child to a different type after creation (by setting its __class__ attribute). But you'd still need to create some type that is a subtype of Outfit. Maybe a type that's a copy of the Child class but with different bases, or maybe an empty class whose bases are Child and Outfit. (A sketch of this option appears at the end of this answer.)
You can make Child into a "virtual subclass" of Outfit, e.g. by making Outfit an abstract base class and calling Outfit.register(Child), or by defining a __subclasscheck__ on Outfit's metaclass. This will work even after Outfit, Child, and child have all been created. In this case, although isinstance(child, Outfit) will pass, it's not actually going to act like an Outfit; you're just fooling isinstance. Or, put a different way, you're getting nominal subtyping without behavioral subtyping.
You can make child into a "virtual instance" of Outfit, without making Child into a virtual subclass, by defining an __instancecheck__ method on Outfit's metaclass.
You can build a Prototype metaclass that effectively flattens out the distinction between classes and instances for its types, and instead gives you something like the Self or JavaScript object model. This is a lot of work, and your Prototype classes will do weird things when interacting with "normal" classes, but it is doable. Having done that, you can then modify child's prototype chain however you want.
If you're only interested in inheriting behavior—not just duck typing as an Outfit, but letting Outfit do the work for you—the easiest way to do that is to forget about types and just compose an Outfit instance into child and forward. But unfortunately, that's not as easy as it sounds:
You can define a __getattr__ directly on the child instance. But a __getattr__ stored on the instance won't actually be called for normal attribute access like child.spam or getattr(child, 'spam'); only the type's __getattr__ matters, because special methods are looked up on the class. If that matters (and it almost certainly does) you need to build a new type and reassign __class__, as described earlier.
You can iterate all of the methods of Outfit, or of the composed outfit instance, and explicitly add forwarding (bound) methods to child. But only a small subset of special methods will look at the instance instead of the class. So, while child.spam() will work, child + 2 will not. If that's a problem (and it usually is, but not quite as universally as the last case), you still need to build a new type and reassign __class__.
Anyway, all of these things are possible, because they are very occasionally useful, but none of them are made easy for you, because they are almost always the wrong thing to do. (If you do decide you want to do any of them, and it isn't obvious how to do it from the docs, and there isn't another question here that explains how, create a new question, explaining clearly which one you want to do and why.)
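For completeness, here is a minimal sketch of the second subtyping option above (build a throwaway class whose bases are Child and Outfit, then reassign __class__). The ChildWithOutfit name is invented for illustration, and this assumes Python 3; it demonstrates the mechanics, not a recommendation:
class ChildWithOutfit(Child, Outfit):
    # Intentionally empty; its only job is to put Outfit on the MRO.
    pass

child = Child('tom', 'M')
child.__class__ = ChildWithOutfit    # child is now an instance of a subtype of Outfit
Outfit.__init__(child)               # run the skipped initializer so self.cloth exists
print(child.get_cloth())             # blue
assert isinstance(child, Outfit)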

As described in the previous answer, applying the concept of inheritance at the instance level rather than the class level does not make much sense. If what you really want is just for child to have access to the get_cloth method of Outfit, you can assign it explicitly:
child = Child('tom', 'M')
child.get_cloth = Outfit().get_cloth
print(child.get_cloth())
Note that I use Outfit().get_cloth and not Outfit.get_cloth to extract get_cloth as a bound method and not just as a function. (With the Outfit shown in the question this exact line would fail, because Outfit.__init__ reads self.gender; the technique assumes an Outfit that can be constructed on its own.)
The fact that this is easily possible comes down to the design choice of the Python language of having self be an argument to every method, instead of some fixed non-local variable.
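A closely related sketch, not part of the original answer: with types.MethodType you can bind Outfit's plain function onto the child instance itself, so that self inside get_cloth is the child rather than a separate Outfit object:
import types

child = Child('tom', 'M')
# What Outfit.__init__ would have set, computed from the child's own gender:
child.cloth = 'blue' if child.gender == 'M' else 'red'
# Bind the plain function Outfit.get_cloth to this one instance.
child.get_cloth = types.MethodType(Outfit.get_cloth, child)
print(child.get_cloth())  # blue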

You could do this a bit more safely by making Child intentionally pass on unknown attributes:
class Outfit(object):
    def __init__(self, child_self):
        self.cloth = 'blue' if child_self.gender == 'M' else 'red'

    def get_cloth(self):
        return self.cloth

class Child:
    def __init__(self, name, gender):
        self.super_self = None
        ...

    def __getattr__(self, name):
        if self.super_self is None:
            raise AttributeError(name)
        else:
            return getattr(self.super_self, name)

child = Child(...)
child.super_self = Outfit(child)
print(child.get_cloth())  # Should work now

Related

Why does the Python's inheritance mechanism allow me to define classes that do not get properly instantiated?

I'm sorry for the rather vague formulation of the question. I'll try to clarify what I mean through a simple example below. I have a class I want to use as a base for other classes:
class parent:
    def __init__(self, constant):
        self.constant = constant

    def addconstant(self, number):
        return self.constant + number
The self.constant parameter is paramount for the usability of the class, as the addconstant method depends on it. Therefore the __init__ method takes care of forcing the user to define the value of this parameter. So far so good.
Now, I define a child class like this:
class child(parent):
    def __init__(self):
        pass
Nothing stops me from creating an instance of this class, but when I try to use the addconstant, it will obviously crash.
b = child()
b.addconstant(5)
AttributeError: 'child' object has no attribute 'constant'
Why does Python allow this!? I feel that Python is "too flexible" and somehow breaks the point of OOP and encapsulation. If I want to extend a class to take advantage of inheritance, I must be careful and know certain details of the implementation of the class. In this case, I have to know that forcing the user to set the constant parameter is fundamental to not breaking the usability of the class. Doesn't this somehow break the encapsulation principle?
Because Python is a dynamic language. It doesn't know what attributes are on a parent instance until parent's initializer puts them there. In other words, the attributes of parent are determined when you instantiate parent and not an instant before.
In a language like Java, the attributes of parent would be established at compile time. But Python class definitions are executed, not compiled.
In practice, this isn't really a problem. Yes, if you forget to call the parent class's initializer, you will have trouble. But calling the parent class's initializer is something you pretty much always do, in any language, because you want the parent class's behavior. So, don't forget to do that.
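For example, a minimal sketch of the usual fix (the value 42 is arbitrary, and the zero-argument super assumes Python 3):
class child(parent):
    def __init__(self):
        # Call the parent initializer so self.constant exists.
        super().__init__(42)

b = child()
print(b.addconstant(5))  # 47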
A sometimes-useful technique is to define a reasonable default on the class itself.
class parent:
    constant = 0

    def __init__(self, constant):
        self.constant = constant

    def addconstant(self, number):
        return self.constant + number
Python falls back to accessing the class's attribute if you haven't defined it on an instance. So, this would provide a fallback value in case you forget to do that.

What's the meaning of the brackets in the class definition?

In Python, when I read other people's code, I sometimes meet this situation where a class is defined with a pair of brackets after its name.
class AStarFoodSearchAgent(SearchAgent):
    def __init__(self):
        #....
I don't know what '(SearchAgent)' means, because the classes I usually write and use don't look like that.
It indicates that AStarFoodSearchAgent is a subclass of SearchAgent. It's part of a concept called inheritance.
What is inheritance?
Here's an example. You might have a Car class, and a RaceCar class. When implementing the RaceCar class, you may find that it has a lot of behavior that is very similar, or exactly the same, as a Car. In that case, you'd make RaceCar a subclass of Car.
class Car(object):
    # Car is a subclass of Python's base object. The reasons for this, and the reasons why you
    # see some classes without (object) or any other class between brackets, are beyond the scope
    # of this answer.
    def get_number_of_wheels(self):
        return 4

    def get_engine(self):
        return CarEngine(fuel=30)

class RaceCar(Car):
    # RaceCar is a subclass of Car
    def get_engine(self):
        return RaceCarEngine(fuel=50)

my_car = Car()            # create a new Car instance
desired_car = RaceCar()   # create a new RaceCar instance
my_car.get_engine()       # returns a CarEngine instance
desired_car.get_engine()  # returns a RaceCarEngine instance
my_car.get_number_of_wheels()       # returns 4
desired_car.get_number_of_wheels()  # also returns 4! WHAT?!?!?!
We didn't define get_number_of_wheels on RaceCar, and still, it exists, and returns 4 when called. That's because RaceCar has inherited get_number_of_wheels from Car. Inheritance is a very nice way to reuse functionality from other classes, and override or add only the functionality that needs to be different.
Your Example
In your example, AStarFoodSearchAgent is a subclass of SearchAgent. This means that it inherits some functionality from SearchAgent. For instance, SearchAgent might implement a method called get_neighbouring_locations(), that returns all the locations reachable from the agent's current location. There is no need to reimplement this just to make an A* agent.
What's also nice about this is that you can use it when you expect a certain type of object but don't care about the implementation. For instance, a find_food function may expect a SearchAgent object, but it wouldn't care about how it searches. You might have an AStarFoodSearchAgent and a DijkstraFoodSearchAgent. As long as both of them inherit from SearchAgent, find_food can use isinstance to check that the searcher it gets behaves like a SearchAgent. The find_food function might look like this:
def find_food(searcher):
    if not isinstance(searcher, SearchAgent):
        raise ValueError("searcher must be a SearchAgent instance.")
    food = searcher.find_food()
    if not food:
        raise Exception("No food. We'll starve!")
    if food.type == "sprouts":
        raise Exception("Sprouts, Yuk!")
    return food
Old/Classic Style Classes
Up to Python 2.1, old-style classes were the only kind that existed. Unless they were a subclass of some other class, they wouldn't have any parentheses after the class name.
class OldStyleCar:
    ...
New style classes always inherit from something. If you don't want to inherit from any other class, you inherit from object.
class NewStyleCar(object):
    ...
New-style classes unify Python types and classes. For instance, the type of 1, which you can obtain by calling type(1), is int, but the type of OldStyleCar() is instance, whereas with new-style classes, type(NewStyleCar()) is NewStyleCar.
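A tiny sketch of that difference (this only applies when run under Python 2, which is an assumption here; under Python 3 every class is new-style):
class OldStyleCar:
    pass

class NewStyleCar(object):
    pass

print(type(OldStyleCar()))  # <type 'instance'> under Python 2
print(type(NewStyleCar()))  # <class '__main__.NewStyleCar'>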
SearchAgent is the superclass of the class AStarFoodSearchAgent. This basically means that an AStarFoodSearchAgent is a special kind of SearchAgent.
It means that class AStarFoodSearchAgent extends SearchAgent.
Check section 9.5 here
https://docs.python.org/2/tutorial/classes.html
This is inheritance in Python, just like in any other OO language.
https://docs.python.org/2/tutorial/classes.html#inheritance
It means that SearchAgent is a base class of AStarFoodSearchAgent. In other word, AStarFoodSearchAgent inherits from SearchAgent class.
See Inheritance - Python tutorial.

How dangerous is setting self.__class__ to something else?

Say I have a class, which has a number of subclasses.
I can instantiate the class. I can then set its __class__ attribute to one of the subclasses. I have effectively changed the class type to the type of its subclass, on a live object. I can call methods on it which invoke the subclass's version of those methods.
So, how dangerous is doing this? It seems weird, but is it wrong to do such a thing? Despite the ability to change type at run-time, is this a feature of the language that should completely be avoided? Why or why not?
(Depending on responses, I'll post a more-specific question about what I would like to do, and if there are better alternatives).
Here's a list of things I can think of that make this dangerous, in rough order from worst to least bad:
It's likely to be confusing to someone reading or debugging your code.
You won't have gotten the right __init__ method, so you probably won't have all of the instance variables initialized properly (or even at all).
The differences between 2.x and 3.x are significant enough that it may be painful to port.
There are some edge cases with classmethods, hand-coded descriptors, hooks to the method resolution order, etc., and they're different between classic and new-style classes (and, again, between 2.x and 3.x).
If you use __slots__, all of the classes must have identical slots. (And if you have compatible but different slots, it may appear to work at first but do horrible things…)
Special method lookup on new-style classes isn't guaranteed to follow the reassigned class. (In fact, this will work in practice with all current Python implementations, but it's not documented to work, so…)
If you use __new__, things will not work the way you naively expected.
If the classes have different metaclasses, things will get even more confusing.
Meanwhile, in many cases where you'd think this is necessary, there are better options:
Use a factory to create an instance of the appropriate class dynamically, instead of creating a base instance and then munging it into a derived one.
Use __new__ or other mechanisms to hook the construction.
Redesign things so you have a single class with some data-driven behavior, instead of abusing inheritance.
As the most common specific case of the last one, just put all of the "variable methods" into classes whose instances are kept as a data member of the "parent", rather than into subclasses. Instead of changing self.__class__ = OtherSubclass, just do self.member = OtherSubclass(self). If you really need methods to magically change, automatic forwarding (e.g., via __getattr__) is a much more common and pythonic idiom than changing classes on the fly; a minimal sketch follows.
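Here is that sketch (the class names are invented for illustration; an outline under those assumptions, not a drop-in implementation):
class BehaviorA(object):
    def __init__(self, owner):
        self.owner = owner

    def describe(self):
        return 'behavior A'

class BehaviorB(object):
    def __init__(self, owner):
        self.owner = owner

    def describe(self):
        return 'behavior B'

class Parent(object):
    def __init__(self):
        self.member = BehaviorA(self)    # instead of self.__class__ = SubclassA

    def switch(self):
        self.member = BehaviorB(self)    # swap behavior by swapping the member

    def __getattr__(self, name):
        # Forward unknown attribute lookups to the current behavior object.
        return getattr(self.member, name)

p = Parent()
print(p.describe())  # behavior A
p.switch()
print(p.describe())  # behavior B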
Assigning the __class__ attribute is useful if you have a long-running application and you need to replace an old version of some object with a newer version of the same class without loss of data, e.g. after some reload(mymodule) and without reloading unchanged modules. Another example is if you implement persistence - something similar to pickle.load.
All other usage is discouraged, especially if you can write the complete code before starting the application.
On arbitrary classes, this is extremely unlikely to work, and is very fragile even if it does. It's basically the same thing as pulling the underlying function objects out of the methods of one class, and calling them on objects which are not instances of the original class. Whether or not that will work depends on internal implementation details, and is a form of very tight coupling.
That said, changing the __class__ of objects amongst a set of classes that were particularly designed to be used this way could be perfectly fine. I've been aware that you can do this for a long time, but I've never yet found a use for this technique where a better solution didn't spring to mind at the same time. So if you think you have a use case, go for it. Just be clear in your comments/documentation what is going on. In particular it means that the implementation of all the classes involved have to respect all of their invariants/assumptions/etc, rather than being able to consider each class in isolation, so you'd want to make sure that anyone who works on any of the code involved is aware of this!
Well, not discounting the problems cautioned about at the start. But it can be useful in certain cases.
First of all, the reason I am looking this post up is because I did just this, and __slots__ doesn't like it (yes, my code is a valid use case for slots; this is pure memory optimization), so I was trying to get around a slots issue.
I first saw this in Alex Martelli's Python Cookbook (1st ed). In the 3rd ed, it's recipe 8.19 "Implementing Stateful Objects or State Machine Problems". A fairly knowledgeable source, Python-wise.
Suppose you have an ActiveEnemy object that has different behavior from an InactiveEnemy and you need to switch back and forth quickly between them. Maybe even a DeadEnemy.
If InactiveEnemy were a subclass or a sibling, you could switch the __class__ attribute. More precisely, the exact ancestry matters less than the methods and attributes being consistent for the code calling them. Think Java interface or, as several people have mentioned, your classes need to be designed with this use in mind.
Now, you still have to manage state transition rules and all sorts of other things. And, yes, if your client code is not expecting this behavior and your instances switch behavior, things will hit the fan.
But I've used this quite successfully on Python 2.x and never had any unusual problems with it. Best done with a common parent and small behavioral differences on subclasses with the same method signatures.
No problems, until my __slots__ issue that's blocking it just now. But slots are a pain in the neck in general.
I would not do this to patch live code. I would also privilege using a factory method to create instances.
But to manage very specific conditions known in advance? Like a state machine that the clients are expected to understand thoroughly? Then it is pretty darn close to magic, with all the risk that comes with it. It's quite elegant.
Python 3 concerns? Test it to see if it works but the Cookbook uses Python 3 print(x) syntax in its example, FWIW.
The other answers have done a good job of discussing the question of why just changing __class__ is likely not an optimal decision.
Below is one example of a way to avoid changing __class__ after instance creation, using __new__. I'm not recommending it, just showing how it could be done, for the sake of completeness. However it is probably best to do this using a boring old factory rather than shoe-horning inheritance into a job for which it was not intended.
class ChildDispatcher:
    _subclasses = dict()

    def __new__(cls, *args, dispatch_arg, **kwargs):
        # dispatch to a registered child class
        subcls = cls.getsubcls(dispatch_arg)
        return super(ChildDispatcher, subcls).__new__(subcls)

    def __init_subclass__(subcls, **kwargs):
        super(ChildDispatcher, subcls).__init_subclass__(**kwargs)
        # add a __new__ constructor to the child class based on a default dispatch argument
        def __new__(cls, *args, dispatch_arg=subcls.__qualname__, **kwargs):
            return super(ChildDispatcher, cls).__new__(cls, *args, **kwargs)
        subcls.__new__ = __new__
        ChildDispatcher.register_subclass(subcls)

    @classmethod
    def getsubcls(cls, key):
        name = cls.__qualname__
        if cls is not ChildDispatcher:
            raise AttributeError(f"type object {name!r} has no attribute 'getsubcls'")
        try:
            return ChildDispatcher._subclasses[key]
        except KeyError:
            raise KeyError(f"No child class key {key!r} in the "
                           f"{cls.__qualname__} subclasses registry")

    @classmethod
    def register_subclass(cls, subcls):
        name = subcls.__qualname__
        if cls is not ChildDispatcher:
            raise AttributeError(f"type object {name!r} has no attribute "
                                 f"'register_subclass'")
        if name not in ChildDispatcher._subclasses:
            ChildDispatcher._subclasses[name] = subcls
        else:
            raise KeyError(f"{name} subclass already exists")

class Child(ChildDispatcher):
    pass

c1 = ChildDispatcher(dispatch_arg="Child")
assert isinstance(c1, Child)
c2 = Child()
assert isinstance(c2, Child)
How "dangerous" it is depends primarily on what the subclass would have done when initializing the object. It's entirely possible that it would not be properly initialized, having only run the base class's __init__(), and something would fail later because of, say, an uninitialized instance attribute.
Even without that, it seems like bad practice for most use cases. Easier to just instantiate the desired class in the first place.
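For instance, a plain factory function (a hedged sketch with invented class names) decides the class up front instead of mutating __class__ afterwards:
class Base(object):
    pass

class SubA(Base):
    pass

class SubB(Base):
    pass

def make_instance(kind):
    # Pick the class before construction rather than munging the instance later.
    classes = {'a': SubA, 'b': SubB}
    return classes[kind]()

obj = make_instance('a')
assert isinstance(obj, SubA)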
Here's an example of one way you could do the same thing without changing __class__. Quoting @unutbu in the comments to the question:
Suppose you were modeling cellular automata. Suppose each cell could be in one of say 5 Stages. You could define 5 classes Stage1, Stage2, etc. Suppose each Stage class has multiple methods.
class Stage1(object):
    …

class Stage2(object):
    …

…

class Cell(object):
    def __init__(self):
        self.current_stage = Stage1()

    def goToStage2(self):
        self.current_stage = Stage2()

    def __getattr__(self, attr):
        return getattr(self.current_stage, attr)
If you allow changing __class__ you could instantly give a cell all the methods of a new stage (same names, but different behavior).
Same for changing current_stage, but this is a perfectly normal and pythonic thing to do, that won't confuse anyone.
Plus, it allows you to not change certain special methods you don't want changed, just by overriding them in Cell.
Plus, it works for data members, class methods, static methods, etc., in ways every intermediate Python programmer already understands.
If you refuse to change __class__, then you might have to include a stage attribute, and use a lot of if statements, or reassign a lot of attributes pointing to different stage's functions
Yes, I've used a stage attribute, but that's not a downside—it's the obvious visible way to keep track of what the current stage is, better for debugging and for readability.
And there's not a single if statement or any attribute reassignment except for the stage attribute.
And this is just one of multiple different ways of doing this without changing __class__.
In the comments I proposed modeling cellular automata as a possible use case for a dynamic __class__. Let's try to flesh out the idea a bit:
Using dynamic __class__:
class Stage(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Stage1(Stage):
    def step(self):
        if ...:
            self.__class__ = Stage2

class Stage2(Stage):
    def step(self):
        if ...:
            self.__class__ = Stage3

cells = [Stage1(x, y) for x in range(rows) for y in range(cols)]

def step(cells):
    for cell in cells:
        cell.step()
    yield cells
For lack of a better term, I'm going to call this
The traditional way: (mainly abarnert's code)
class Stage1(object):
    def step(self, cell):
        ...
        if ...:
            cell.goToStage2()

class Stage2(object):
    def step(self, cell):
        ...
        if ...:
            cell.goToStage3()

class Cell(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.current_stage = Stage1()

    def goToStage2(self):
        self.current_stage = Stage2()

    def __getattr__(self, attr):
        return getattr(self.current_stage, attr)

cells = [Cell(x, y) for x in range(rows) for y in range(cols)]

def step(cells):
    for cell in cells:
        cell.step(cell)
    yield cells
Comparison:
The traditional way creates a list of Cell instances, each with a current_stage attribute.
The dynamic __class__ way creates a list of instances of subclasses of Stage. There is no need for a current_stage attribute since __class__ already serves this purpose.
The traditional way uses goToStage2, goToStage3, ... methods to switch stages.
The dynamic __class__ way requires no such methods. You just reassign __class__.
The traditional way uses the special method __getattr__ to delegate some method calls to the appropriate stage instance held in the self.current_stage attribute.
The dynamic __class__ way does not require any such delegation. The instances in cells are already the objects you want.
The traditional way needs to pass the cell as an argument to Stage.step, so that cell.goToStageN can be called.
The dynamic __class__ way does not need to pass anything. The object we are dealing with has everything we need.
Conclusion:
Both ways can be made to work. To the extent that I can envision how these two implementations would pan out, it seems to me the dynamic __class__ implementation will be simpler (no Cell class), more elegant (no ugly goToStage2 methods, no brain-teasers like why you need to write cell.step(cell) instead of cell.step()), and easier to understand (no __getattr__, no additional level of indirection).

What is the correct way to extend a parent class method in modern Python

I frequently do this sort of thing:
class Person(object):
    def greet(self):
        print "Hello"

class Waiter(Person):
    def greet(self):
        Person.greet(self)
        print "Would you like fries with that?"
The line Person.greet(self) doesn't seem right. If I ever change what class Waiter inherits from I'm going to have to track down every one of these and replace them all.
What is the correct way to do this in modern Python? Both 2.x and 3.x; I understand there were changes in this area in 3.
If it matters any I generally stick to single inheritance, but if extra stuff is required to accommodate multiple inheritance correctly it would be good to know about that.
You use super:
Return a proxy object that delegates method calls to a parent or sibling class of type. This is useful for accessing inherited methods that have been overridden in a class. The search order is same as that used by getattr() except that the type itself is skipped.
In other words, a call to super returns a fake object which delegates attribute lookups to classes above you in the inheritance chain. Points to note:
This does not work with old-style classes -- so if you are using Python 2.x, you need to ensure that the top class in your hierarchy inherits from object.
You need to pass your own class and instance to super in Python 2.x. This requirement was waived in 3.x.
This will handle all multiple inheritance correctly. (When you have a multiple inheritance tree in Python, a method resolution order is generated and the lookups go through parent classes in this order.)
Take care: there are many places to get confused about multiple inheritance in Python. You might want to read super() Considered Harmful. If you are sure that you are going to stick to a single inheritance tree, and that you are not going to change the names of classes in said tree, you can hardcode the class names as you do above and everything will work fine.
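As a quick sketch of both spellings (the zero-argument form is Python 3 only; the explicit form works in both 2.x and 3.x, assuming a new-style base class):
class Person(object):
    def greet(self):
        print("Hello")

class Waiter(Person):
    def greet(self):
        super(Waiter, self).greet()   # works in Python 2 and 3
        # super().greet()             # Python 3 only: the zero-argument form
        print("Would you like fries with that?")

Waiter().greet()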
Not sure if you're looking for this but you can call a parent without referring to it by doing this.
super(Waiter, self).greet()
This will call the greet() function in Person.
katrielalex's answer is really the answer to your question, but this wouldn't fit in a comment.
If you plan to go about using super everywhere, and you ever think in terms of multiple inheritance, definitely read the "super() Considered Harmful" link. super() is a great tool, but it takes understanding to use correctly. In my experience, for simple things that don't seem likely to get into complicated diamond inheritance tangles, it's actually easier and less tedious to just call the superclass directly and deal with the renames when you change the name of the base class.
In fact, in Python 2 you have to include the current class name, which is usually more likely to change than the base class name. (And sometimes it's very difficult to pass a reference to the current class if you're doing wacky things: at the point when the method is being defined, the class isn't bound to any name, and at the point when the super call is executed, the original name of the class may no longer be bound to the class, such as when you're using a class decorator.)
I'd like to make this more explicit with an example, much like what we do in JavaScript. The short answer: call the parent's method through super, just as we do when invoking the parent constructor.
class Person(object):
    def __init__(self, name):
        self.name = name

    def greet(self):
        print(f"Hello, I'm {self.name}")

class Waiter(Person):
    def __init__(self, name):
        super().__init__(name)
        # initiate the parent constructor
        # or super(Waiter, self).__init__(name)

    def greet(self):
        super(Waiter, self).greet()
        print("Would you like fries with that?")

waiter = Waiter("John")
waiter.greet()
# Hello, I'm John
# Would you like fries with that?

Disable class instance methods

How can I quickly disable all methods in a class instance based on a condition? My naive solution is to override __getattr__, but it is not called when the attribute already exists.
class my():
    def method1(self):
        print 'method1'

    def method2(self):
        print 'method2'

    def __getattr__(self, name):
        print 'Fetching ' + str(name)
        if self.isValid():
            return getattr(self, name)

    def isValid(self):
        return False

if __name__ == '__main__':
    m = my()
    m.method1()
The equivalent of what you want to do is actually to override __getattribute__, which is going to be called for every attribute access. Besides it being very slow, take care: by definition of every, that includes e.g. the call to self.isValid within __getattribute__'s own body, so you'll have to use some circuitous route to access that attribute (type(self).isValid(self) should work, for example, as it gets the attribute from the class, not from the instance).
This points to a horrible terminological confusion: this is not disabling "method from a class", but from an instance, and in particular has nothing to do with classmethods. If you do want to work in a similar way on a class basis, rather than an instance basis, you'll need to make a custom metaclass and override __getattribute__ on the metaclass (that's the one that's called when you access attributes on the class -- as you're asking in your title and text -- rather than on the instance -- as you in fact appear to be doing, which is by far the more normal and usual case).
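A minimal sketch of that __getattribute__ approach (a hedged illustration, written with Python 3 syntax rather than the question's Python 2 code):
class My(object):
    def method1(self):
        print('method1')

    def isValid(self):
        return False

    def __getattribute__(self, name):
        # Fetch isValid through the class, not through self, to avoid recursing
        # back into __getattribute__.
        if name != 'isValid' and not type(self).isValid(self):
            raise AttributeError(name)
        return object.__getattribute__(self, name)

m = My()
m.isValid()              # still reachable
try:
    m.method1()          # blocked while isValid() returns False
except AttributeError as exc:
    print('blocked:', exc)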
Edit: a completely different approach might be to use a peculiarly Pythonic pathway to implementing the State design pattern: class-switching. E.g.:
class _NotValid(object):
    def isValid(self):
        return False

    def setValid(self, yesno):
        if yesno:
            self.__class__ = TheGoodOne

class TheGoodOne(object):
    def isValid(self):
        return True

    def setValid(self, yesno):
        if not yesno:
            self.__class__ = _NotValid

    # write all other methods here
As long as you can call setValid appropriately, so that the object's __class__ is switched appropriately, this is very fast and simple -- essentially, the object's __class__ is where all the object's methods are found, so by switching it you switch, en masse, the set of methods that exist on the object at a given time. However, this does not work if you absolutely insist that validity checking must be performed "just in time", i.e. at the very instant the object's method is being looked up.
An intermediate approach between this and the __getattribute__ one would be to introduce an extra level of indirection (which is popularly held to be the solution to all problems;-), along the lines of:
class _Valid(object):
    def __init__(self, actualobject):
        self._actualobject = actualobject

    # all actual methods go here,
    # keeping state in self._actualobject

class Wrapit(object):
    def __init__(self):
        self._themethods = _Valid(self)

    def isValid(self):
        # whatever logic you want
        # (DON'T call other self. methods!-)
        return False

    def __getattr__(self, n):
        if self.isValid():
            return getattr(self._themethods, n)
        raise AttributeError(n)
This is more idiomatic than __getattribute__ because it relies on the fact that __getattr__ is only called for attributes that aren't found in other ways -- so the object can hold normal state (data) in its __dict__, and that will be accessed without any big overhead; only method calls pay the extra overhead of indirection. The _Valid class instances can keep some or all state in their respective self._actualobject, if any of the state needs to stay accessible on invalid objects (so that the invalid state disables methods but not data attribute access; it's not clear from your Q if that's needed, but it's a free extra possibility offered by this approach). This idiom is less error-prone than __getattribute__, since state can be accessed more directly in the methods (without triggering validity checks).
As presented, the solution creates a circular reference loop, which may impose a bit of overhead in terms of garbage collection. If that's a problem in your application, use the weakref module from the standard Python library, of course -- that module is generally the simplest way to remove circular loops of references, if and when they're a problem.
(E.g., make the _actualobject attribute of _Valid class instances a weak reference to the object that holds that instance as its _themethods attribute).
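A hedged sketch of that weakref suggestion (weakref.proxy is one reasonable choice among several):
import weakref

class _Valid(object):
    def __init__(self, actualobject):
        # A weak proxy back to the owner breaks the Wrapit <-> _Valid reference cycle.
        self._actualobject = weakref.proxy(actualobject)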
