I have been messing around with pygame and Python and I want to be able to call a function whenever an attribute of my class changes. My current solution is:
class ExampleClass(parentClass):
    def __init__(self):
        self.rect = pygame.rect.Rect(0,0,100,100)

    def __setattr__(self, name, value):
        parentClass.__setattr__(self,name,value)
        dofancystuff()

Firstclass = ExampleClass()
This works fine, and dofancystuff is called when I change the rect value with Firstclass.rect = pygame.rect.Rect(0,0,100,100). However, if I say Firstclass.rect.bottom = 3, __setattr__ and therefore dofancystuff is not called.
So my question I guess is how can I intercept any change to an attribute of a subclass?
Edit: Also, if I am going about this the wrong way please do tell; I'm not very knowledgeable when it comes to Python.
Well, the simple answer is you can't. In the case of Firstclass.rect = <...> your __setattr__ is called. But in the case of Firstclass.rect.bottom = 3 the __setattr__ method of the Rect instance is called. The only solution I see is to create a derived class of pygame.rect.Rect where you overwrite the __setattr__ method. You can also monkey patch the Rect class, but this is discouraged for good reasons.
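A minimal sketch of that derived-class idea, assuming pygame is installed and reusing the dofancystuff function from the question:

    import pygame

    def dofancystuff():
        print("rect changed")

    class NotifyingRect(pygame.rect.Rect):
        # every attribute assignment (bottom, x, width, ...) lands here first
        def __setattr__(self, name, value):
            super().__setattr__(name, value)
            dofancystuff()

    r = NotifyingRect(0, 0, 100, 100)
    r.bottom = 3   # dofancystuff() runs

In ExampleClass you would then assign self.rect = NotifyingRect(0, 0, 100, 100) so that both the outer __setattr__ and the rect's own hook fire.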
You could try __getattr__, which should be called on Firstclass.rect.
But do this instead: Create a separate class (subclass of pygame.rect?) for ExampleClass.rect. Implement __setattr__ in that class. Now you will be told about anything that gets set in your rect member for ExampleClass. You can still implement __setattr__ in ExampleClass (and should), only now make sure you instantiate a version of your own rect class...
BTW: Don't call your objects Firstclass, as then it looks like a class as opposed to an object...
This isn't answering the question but it is relevant:
self.__dict__[name] = value
is probably better than
parentClass.__setattr__(self, name, value)
This is recommended by the Python documentation (http://docs.python.org/2/reference/datamodel.html?highlight=setattr#object.setattr) and makes more sense anyway in the general case, since it does not assume anything about the behaviour of parentClass's __setattr__.
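Applied to the class from the question, that suggestion would look roughly like this:

    class ExampleClass(parentClass):
        def __init__(self):
            self.rect = pygame.rect.Rect(0, 0, 100, 100)

        def __setattr__(self, name, value):
            self.__dict__[name] = value   # store directly; no assumptions about parentClass
            dofancystuff()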
Yay for unsolicited advice four years too late!
I think the reason why you have this difficulty deserves a little more information than is provided by the other answers.
The problem is, when you do:
myObject.attribute.thing = value
You're not assigning a value to attribute. The code is equivalent to this:
anAttribute = myObject.attribute
anAttribute.thing = value
As far as myObject is concerned, all you're doing is getting the attribute; you're not setting the attribute.
Making subclasses of your attributes that you control, and can define __setattr__ for, is one solution.
An alternative solution, that may make sense if you have lots of attributes of different types and don't want to make lots of individual subclasses for all of them, is to override __getattribute__ or __getattr__ to return a facade to the attribute that performs the relevant operations in its __setattr__ method. I've not attempted to do this myself, but I imagine that you should be able to make a simple facade class that will act as a facade for any object.
Care would need to be taken in the choice of __getattribute__ and __getattr__. See the documentation for information, but basically if __getattr__ is used, the actual attributes will have to be encapsulated/obfuscated somehow so that __getattr__ handles requests for them, and if __getattribute__ is used, it'll have to retrieve attributes via calls to a base class.
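A rough sketch of such a facade class; the names here (SetattrFacade, on_change) are made up for illustration:

    class SetattrFacade:
        # wraps another object: reads pass through, writes also fire a callback
        def __init__(self, target, on_change):
            object.__setattr__(self, "_target", target)
            object.__setattr__(self, "_on_change", on_change)

        def __getattr__(self, name):
            return getattr(self._target, name)

        def __setattr__(self, name, value):
            setattr(self._target, name, value)
            self._on_change()

The owning class's __getattr__/__getattribute__ would then return SetattrFacade(real_attribute, dofancystuff) instead of the raw attribute.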
If all you're trying to do is determine if some rects have been updated, then this is overkill.
I have a class
class A:
    def sample_method():
        ...
I would like to decorate class A sample_method() and override the contents of sample_method()
class DecoratedA(A):
    def sample_method():
        ...
The setup above resembles inheritance, but I need to keep the preexisting instance of class A when the decorated function is used.
a # preexisting instance of class A
decorated_a = DecoratedA(a)
decorated_a.functionInClassA() #functions in Class A called as usual with preexisting instance
decorated_a.sample_method() #should call the overwritten sample_method() defined in DecoratedA
What is the proper way to go about this?
There isn't a straightforward way to do what you're asking. Generally, after an instance has been created, it's too late to mess with the methods its class defines.
There are two options you have, as far as I see it. Either you create a wrapper or proxy object for your pre-existing instance, or you modify the instance to change its behavior.
A proxy defers most behavior to the object itself, while only adding (or overriding) some limited behavior of its own:
class Proxy:
    def __init__(self, obj):
        self.obj = obj

    def overridden_method(self):  # add your own limited behavior for a few things
        do_stuff()

    def __getattr__(self, name):  # and hand everything else off to the other object
        return getattr(self.obj, name)
__getattr__ isn't perfect here; it only works for regular methods, not special __dunder__ methods that are often looked up directly on the class itself. If you want your proxy to match all possible behavior, you probably need to add things like __add__ and __getitem__, but that might not be necessary in your specific situation (it depends on what A does).
As for changing the behavior of the existing object, one approach is to write your subclass, and then change the existing object's class to be the subclass. This is a little sketchy, since you won't have ever initialized the object as the new class, but it might work if you're only modifying method behavior.
class ModifiedA(A):
    def overridden_method(self):  # do the override in a normal subclass
        do_stuff()

def modify_obj(obj):              # then change an existing object's type in place!
    obj.__class__ = ModifiedA     # this is not terribly safe, but it can work
You could also consider adding an instance variable that would shadow the method you want to override, rather than modifying __class__. Writing the function could be a little tricky, since it won't get bound to the object automatically when called (that only happens for functions that are attributes of a class, not attributes of an instance), but you could probably do the binding yourself (with partial or lambda) if you need to access self.
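For example, a hedged sketch of that shadowing idea, reusing the names from the question and assuming sample_method takes self as its first argument:

    a = A()   # the pre-existing instance from the question
    # shadow sample_method with an instance attribute and bind self by hand
    a.sample_method = lambda *args, **kwargs: DecoratedA.sample_method(a, *args, **kwargs)
    a.sample_method()   # uses DecoratedA's version; other A instances are unaffected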
First, why not just define it from the beginning, how you want it, instead of decorating it?
Second, why not decorate the method itself?
To answer the question:
You can reassign it
class A:
    def sample_method():
        pass

A.sample_method = DecoratedA.sample_method
but that affects every instance.
Another solution is to reassign the method for just one object.
import functools
a.sample_method = functools.partial(DecoratedA.sample_method, a)
Another solution is to (temporarily) change the type of an existing object.
a = A()
a.__class__ = DecoratedA
a.sample_method()
a.__class__ = A
My questions concern instance variables that are initialized in methods outside the class constructor. This is for Python.
I'll first state what I understand:
1. Classes may define a constructor, and they may also define other methods.
2. Instance variables are generally defined/initialized within the constructor.
3. But instance variables can also be defined/initialized outside the constructor, e.g. in the other methods of the same class.
An example of (2) and (3) -- see self.meow and self.roar in the Cat class below:
class Cat():
    def __init__(self):
        self.meow = "Meow!"

    def meow_bigger(self):
        self.roar = "Roar!"
My questions:
1. Why is it best practice to initialize the instance variable within the constructor?
2. What general/specific mess could arise if instance variables are regularly initialized in methods other than the constructor? (E.g. having read Mark Lutz's Tkinter guide in his Programming Python, which I thought was excellent, I noticed that the instance variables used to hold the PhotoImage objects/references were initialized in the further methods, not in the constructor. It seemed to work without issue there, but could that practice cause issues in the long run?)
3. In what scenarios would it be better to initialize instance variables in the other methods, rather than in the constructor?
To my knowledge, instance variables exist not when the class object is created, but after the class is instantiated. Working from my code above, I demonstrate this:
>>> c = Cat()
>>> c.meow
'Meow!'
>>> c.roar
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'Cat' object has no attribute 'roar'
>>> c.meow_bigger()
>>> c.roar
'Roar!'
As it were:
I cannot access the instance variable (c.roar) at first.
However, after I have called the instance method c.meow_bigger() once, I am suddenly able to access the instance variable c.roar.
Why is the above behaviour so?
Thank you for helping out with my understanding.
Why is it best practice to initialize the instance variable within the constructor?
Clarity.
Because it makes it easy to see at a glance all of the attributes of the class. If you initialize the variables in multiple methods, it becomes difficult to understand the complete data structure without reading every line of code.
Initializing within the __init__ also makes documentation easier. With your example, you can't write "an instance of Cat has a roar attribute". Instead, you have to add a paragraph explaining that an instance of Cat might have a "roar" attribute, but only after calling the "meow_bigger" method.
Clarity is king. One of the smartest programmers I ever met once told me "show me your data structures, and I can tell you how your code works without seeing any of your code". While that's a tiny bit hyperbolic, there's definitely a ring of truth to it. One of the biggest hurdles to learning a code base is understanding the data that it manipulates.
What general/specific mess could arise if instance variables are regularly initialized in methods other than the constructor?
The most obvious one is that an object may not have an attribute available during all parts of the program, leading to having to add a lot of extra code to handle the case where the attribute is undefined.
In what scenarios would it be better to initialize instance variables in the other methods, rather than in the constructor?
I don't think there are any.
Note: you don't necessarily have to initialize an attribute with its final value. In your case it's acceptable to initialize roar to None. The mere fact that it has been initialized to something shows that it's a piece of data that the class maintains. It's fine if the value changes later.
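For example, with the Cat class from the question:

    class Cat():
        def __init__(self):
            self.meow = "Meow!"
            self.roar = None      # declared up front; meow_bigger() fills it in later

        def meow_bigger(self):
            self.roar = "Roar!"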
Remember that in "pure" Python, an object's members are just entries in a dictionary. Members aren't added to an instance's dictionary until you run the function in which they are defined. Ideally this is the constructor, because that then guarantees that your members will all exist regardless of the order in which your functions are called.
I believe your example above could be translated to:
class Cat():
    def __init__(self):
        self.__dict__['meow'] = "Meow!"

    def meow_bigger(self):
        self.__dict__['roar'] = "Roar!"
>>> c = Cat() # c.__dict__ = { 'meow': "Meow!" }
>>> c.meow_bigger() # c.__dict__ = { 'meow': "Meow!", 'roar': "Roar!" }
Initializing instance variables within the constructor is, as you already pointed out, only a recommendation in Python.
First of all, defining all instance variables within the constructor is a good way to document a class. Everybody reading the code knows what kind of internal state an instance has.
Secondly, order matters. If one defines an instance variable V in a method A, and another method B also accesses V, it is important to call A before B. Otherwise B will fail since V was never defined. Maybe A really does have to be invoked before B, but then that should be ensured by an internal state, which would itself be an instance variable.
There are many more examples. Generally it is just a good idea to define everything in the __init__ method, and set it to None if it cannot or should not be given a real value at initialization.
Of course, one could use hasattr to derive some information about the state. But one could also check whether some instance variable V is, for example, None, which can imply the same thing.
So in my opinion, it is never a good idea to define an instance variable anywhere other than in the constructor.
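A tiny sketch of those two checks on a Cat instance c; each line corresponds to one of the two designs:

    # if roar is only ever created inside meow_bigger():
    has_roared = hasattr(c, 'roar')
    # if __init__ sets self.roar = None up front:
    has_roared = c.roar is not None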
Your examples demonstrate some basic properties of Python. An object in Python is basically just a dictionary.
Let's use a dictionary: one can add functions and values to it and construct some kind of OOP. Using the class statement just brings everything into a clean syntax and provides extra features like magic methods.
In other languages, all information about instance variables and functions is present before the object is initialized. Python does this at runtime. You can also add new methods to any object outside the class definition: Adding a Method to an Existing Object Instance
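A quick sketch of that last point, binding a made-up purr function to a single existing Cat instance with types.MethodType:

    import types

    def purr(self):                      # a free function; the name purr is invented here
        return self.meow + " purr..."

    c = Cat()
    c.purr = types.MethodType(purr, c)   # bound to this one instance only
    print(c.purr())                      # -> Meow! purr...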
3.) But instance variables can also be defined/initialized outside the constructor, e.g. in the other methods of the same class.
I'd recommend providing a default state in initialization, just so its clear what the class should expect. In statically typed languages, you'd have to do this, and it's good practice in python.
Let's convey this by replacing the variable roar with a more meaningful variable like has_roared.
In this case, your meow_bigger() method now has a reason to set has_roared. You'd initialize it to False in __init__, as the cat has not roared yet upon instantiation.
class Cat():
    def __init__(self):
        self.meow = "Meow!"
        self.has_roared = False

    def meow_bigger(self):
        print self.meow + "!!!"
        self.has_roared = True
Now do you see why it often makes sense to initialize attributes with default values?
All that being said, why does python not enforce that we HAVE to define our variables in the __init__ method? Well, being a dynamic language, we can now do things like this.
>>> cat1 = Cat()
>>> cat2 = Cat()
>>> cat1.name = "steve"
>>> cat2.name = "sarah"
>>> print cat1.name
... "steve"
The name attribute was not defined in the __init__ method, but we're able to add it anyway. This is a more realistic use case of setting variables that aren't defaulted in __init__.
I'll try to provide a case where you would do so, for:
3.) But instance variables can also be defined/initialized outside the constructor, e.g. in the other methods of the same class.
I agree it would be clearer and more organized to declare instance fields in the constructor, but sometimes you inherit another class, written by someone else, which has many instance fields and APIs.
If you inherit it only for certain APIs and you want your own instance fields for your own APIs, it can be easier to declare the extra instance field in a method instead of overriding the other class's constructor and digging into its source code. This also supports Adam Hughes's answer, because in this case you will always have your field defined, since you are guaranteed to call your own API first.
For instance, suppose you inherit a package's handler class for web development and you want to include a new instance field called user for the handler. You would probably just declare it directly in the initialize method rather than overriding the constructor; I have seen this done commonly.
class BlogHandler(webapp2.RequestHandler):
    def initialize(self, *a, **kw):
        webapp2.RequestHandler.initialize(self, *a, **kw)
        uid = self.read_cookie('user_id')  # get user_id by reading the cookie in the browser
        self.user = User.by_id(int(uid))   # query the database to find and return the user
These are very open questions.
Python is a very "free" language in the sense that it tries to never restrict you from doing anything, even if it looks silly. This is why you can do completely useless things such as replacing a class with a boolean (Yes you can).
The behaviour that you mention follows that same logic: if you wish to add an attribute to an object (or to a function - yes you can, too) dynamically, anywhere, not necessarily in the constructor, well... you can.
But it is not because you can that you should. The main reason for initializing attributes in the constructor is readability, which is a prerequisite for maintenance. As Bryan Oakley explains in his answer, class fields are key to understand the code as their names and types often reveal the intent better than the methods.
That being said, there is now a way to separate attribute definition from constructor initialization: pyfields. I wrote this library to be able to define the "contract" of a class in terms of attributes, while not requiring initialization in the constructor. This allows you in particular to create "mix-in classes" where attributes and methods relying on these attributes are defined, but no constructor is provided.
See this other answer for an example and details.
I think, to keep it simple and understandable, it is better to initialize the instance variables in the class constructor, so they can be accessed directly without having to call a specific class method first.
class Cat():
    def __init__(self, Meow, Roar):
        self.meow = Meow
        self.roar = Roar

    def meow_bigger(self):
        return self.roar

    def mix(self):
        return self.meow + self.roar

c = Cat("Meow!", "Roar!")
print(c.meow_bigger())
print(c.mix())
Output
Roar!
Meow!Roar!
Is there any difference in the following two pieces of code? If not, is one preferred over the other? Why would we be allowed to create class attributes dynamically?
Snippet 1
class Test(object):
    def setClassAttribute(self):
        Test.classAttribute = "Class Attribute"

Test().setClassAttribute()
Snippet 2
class Test(object):
    classAttribute = "Class Attribute"

Test()
First, setting a class attribute on an instance method is a weird thing to do. And ignoring the self parameter and going right to Test is another weird thing to do, unless you specifically want all subclasses to share a single value.*
* If you did specifically want all subclasses to share a single value, I'd make it a @staticmethod with no params (and set it on Test). But in that case it isn't even really being used as a class attribute, and might work better as a module global, with a free function to set it.
So, even if you wanted to go with the first version, I'd write it like this:
class Test(object):
    @classmethod
    def setClassAttribute(cls):
        cls.classAttribute = "Class Attribute"

Test.setClassAttribute()
However, all that being said, I think the second is far more pythonic. Here are the considerations:
In general, getters and setters are strongly discouraged in Python.
The first one leaves a gap during which the class exists but has no attribute.
Simple is better than complex.
The one thing to keep in mind is that part of the reason getters and setters are unnecessary in Python is that you can always replace an attribute with a @property if you later need it to be computed, validated, etc. With a class attribute, that's not quite as perfect a solution, but it's usually good enough.
One last thing: class attributes (and class methods, except for alternate constructor) are often a sign of a non-pythonic design at a higher level. Not always, of course, but often enough that it's worth explaining out loud why you think you need a class attribute and making sure it makes sense. (And if you've ever programmed in a language whose idioms make extensive use of class attributes—especially if it's Java—go find someone who's never used Java and try to explain it to him.)
It's more natural to do it like #2, but notice that they do different things. With #2, the class always has the attribute. With #1, it won't have the attribute until you call setClassAttribute.
You asked, "Why would we be allowed to create class attributes dynamically?" With Python, the question often is not "why would we be allowed to", but "why should we be prevented?" A class is an object like any other, it has attributes. Objects (generally) can get new attributes at any time. There's no reason to make a class be an exception to that rule.
I think #2 feels more natural. #1's implementation means that the attribute doesn't get set until an actual instance of the class gets created, which to me seems counterintuitive to what a class attribute (vs. object attribute) should be.
I have several classes where I want to add a single property to each class (its md5 hash value) and calculate that hash value when initializing objects of that class, but otherwise maintain everything else about the class. Is there any more elegant way to do that in python than to create a subclass for all the classes where I want to change the initialization and add the property?
You can add properties and override __init__ dynamically:
def newinit(self, orig):
    orig(self)
    self._md5 = ...  # calculate md5 here

_orig_init = A.__init__
A.__init__ = lambda self: newinit(self, _orig_init)
A.md5 = property(lambda self: self._md5)
However, this can get quite confusing, even once you use more descriptive names than I did above. So I don't really recommend it.
Cleaner would probably be to simply subclass, possibly using a mixin class if you need to do this for multiple classes. You could also consider creating the subclasses dynamically using type() to cut down on the boilerplate further, but clarity of code would be my first concern.
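If you go the mixin/subclass route, a rough sketch might look like this (Md5Mixin, HashedA, and hashing repr(vars(self)) are illustrative choices, not a fixed recipe; A and B stand for your existing classes):

    import hashlib

    class Md5Mixin:
        # runs the original __init__, then snapshots an md5 of the instance state
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self._md5 = hashlib.md5(repr(vars(self)).encode()).hexdigest()

        @property
        def md5(self):
            return self._md5

    class HashedA(Md5Mixin, A):   # one small subclass per original class
        pass

    # or build the subclasses dynamically to cut boilerplate:
    # HashedB = type("HashedB", (Md5Mixin, B), {})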
Say I have a class, which has a number of subclasses.
I can instantiate the class. I can then set its __class__ attribute to one of the subclasses. I have effectively changed the class type to the type of its subclass, on a live object. I can call methods on it which invoke the subclass's version of those methods.
So, how dangerous is doing this? It seems weird, but is it wrong to do such a thing? Despite the ability to change type at run-time, is this a feature of the language that should completely be avoided? Why or why not?
(Depending on responses, I'll post a more-specific question about what I would like to do, and if there are better alternatives).
Here's a list of things I can think of that make this dangerous, in rough order from worst to least bad:
It's likely to be confusing to someone reading or debugging your code.
You won't have gotten the right __init__ method, so you probably won't have all of the instance variables initialized properly (or even at all).
The differences between 2.x and 3.x are significant enough that it may be painful to port.
There are some edge cases with classmethods, hand-coded descriptors, hooks to the method resolution order, etc., and they're different between classic and new-style classes (and, again, between 2.x and 3.x).
If you use __slots__, all of the classes must have identical slots. (And if you have the compatible but different slots, it may appear to work at first but do horrible things…)
Special method definitions in new-style classes may not change. (In fact, this will work in practice with all current Python implementations, but it's not documented to work, so…)
If you use __new__, things will not work the way you naively expected.
If the classes have different metaclasses, things will get even more confusing.
Meanwhile, in many cases where you'd think this is necessary, there are better options:
Use a factory to create an instance of the appropriate class dynamically, instead of creating a base instance and then munging it into a derived one.
Use __new__ or other mechanisms to hook the construction.
Redesign things so you have a single class with some data-driven behavior, instead of abusing inheritance.
As the most common specific case of the last one, just put all of the "variable methods" into classes whose instances are kept as a data member of the "parent", rather than into subclasses. Instead of changing self.__class__ = OtherSubclass, just do self.member = OtherSubclass(self). If you really need methods to magically change, automatic forwarding (e.g., via __getattr__) is a much more common and pythonic idiom than changing classes on the fly.
Assigning the __class__ attribute is useful if you have a long-running application and you need to replace an old version of some object by a newer version of the same class without loss of data, e.g. after some reload(mymodule) and without reloading unchanged modules. Another example is if you implement persistence - something similar to pickle.load.
All other usage is discouraged, especially if you can write the complete code before starting the application.
On arbitrary classes, this is extremely unlikely to work, and is very fragile even if it does. It's basically the same thing as pulling the underlying function objects out of the methods of one class, and calling them on objects which are not instances of the original class. Whether or not that will work depends on internal implementation details, and is a form of very tight coupling.
That said, changing the __class__ of objects amongst a set of classes that were particularly designed to be used this way could be perfectly fine. I've been aware that you can do this for a long time, but I've never yet found a use for this technique where a better solution didn't spring to mind at the same time. So if you think you have a use case, go for it. Just be clear in your comments/documentation what is going on. In particular it means that the implementation of all the classes involved have to respect all of their invariants/assumptions/etc, rather than being able to consider each class in isolation, so you'd want to make sure that anyone who works on any of the code involved is aware of this!
Well, not discounting the problems cautioned about at the start. But it can be useful in certain cases.
First of all, the reason I am looking this post up is because I did just this, and __slots__ doesn't like it (yes, my code is a valid use case for slots; this is pure memory optimization), and I was trying to get around a slots issue.
I first saw this in Alex Martelli's Python Cookbook (1st ed). In the 3rd ed, it's recipe 8.19 "Implementing Stateful Objects or State Machine Problems". A fairly knowledgeable source, Python-wise.
Suppose you have an ActiveEnemy object that has different behavior from an InactiveEnemy and you need to switch back and forth quickly between them. Maybe even a DeadEnemy.
If InactiveEnemy were a subclass or a sibling, you could switch the __class__ attribute between them. More exactly, the exact ancestry matters less than the methods and attributes being consistent with the code calling them. Think of a Java interface or, as several people have mentioned, your classes need to be designed with this use in mind.
Now, you still have to manage state transition rules and all sorts of other things. And, yes, if your client code is not expecting this behavior and your instances switch behavior, things will hit the fan.
But I've used this quite successfully on Python 2.x and never had any unusual problems with it. Best done with a common parent and small behavioral differences on subclasses with the same method signatures.
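A hedged sketch of that enemy example; only the class names come from the answer, and the update/health details are invented for illustration:

    class Enemy(object):
        def __init__(self, health):
            self.health = health

    class ActiveEnemy(Enemy):
        def update(self):
            self.health -= 1
            if self.health <= 0:
                self.__class__ = DeadEnemy   # same data, new behavior from now on

    class InactiveEnemy(Enemy):
        def update(self):
            pass                             # dormant: ignores updates

    class DeadEnemy(Enemy):
        def update(self):
            pass                             # dead: nothing left to do

    e = ActiveEnemy(health=1)
    e.update()
    assert isinstance(e, DeadEnemy)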
No problems, until my __slots__ issue that's blocking it just now. But slots are a pain in the neck in general.
I would not do this to patch live code. I would also privilege using a factory method to create instances.
But to manage very specific conditions known in advance? Like a state machine that the clients are expected to understand thoroughly? Then it is pretty darn close to magic, with all the risk that comes with it. It's quite elegant.
Python 3 concerns? Test it to see if it works but the Cookbook uses Python 3 print(x) syntax in its example, FWIW.
The other answers have done a good job of discussing the question of why just changing __class__ is likely not an optimal decision.
Below is one example of a way to avoid changing __class__ after instance creation, using __new__. I'm not recommending it, just showing how it could be done, for the sake of completeness. However it is probably best to do this using a boring old factory rather than shoe-horning inheritance into a job for which it was not intended.
class ChildDispatcher:
    _subclasses = dict()

    def __new__(cls, *args, dispatch_arg, **kwargs):
        # dispatch to a registered child class
        subcls = cls.getsubcls(dispatch_arg)
        return super(ChildDispatcher, subcls).__new__(subcls)

    def __init_subclass__(subcls, **kwargs):
        super(ChildDispatcher, subcls).__init_subclass__(**kwargs)

        # add __new__ constructor to child class based on default first dispatch argument
        def __new__(cls, *args, dispatch_arg = subcls.__qualname__, **kwargs):
            return super(ChildDispatcher, cls).__new__(cls, *args, **kwargs)
        subcls.__new__ = __new__

        ChildDispatcher.register_subclass(subcls)

    @classmethod
    def getsubcls(cls, key):
        name = cls.__qualname__
        if cls is not ChildDispatcher:
            raise AttributeError(f"type object {name!r} has no attribute 'getsubcls'")
        try:
            return ChildDispatcher._subclasses[key]
        except KeyError:
            raise KeyError(f"No child class key {key!r} in the "
                           f"{cls.__qualname__} subclasses registry")

    @classmethod
    def register_subclass(cls, subcls):
        name = subcls.__qualname__
        if cls is not ChildDispatcher:
            raise AttributeError(f"type object {name!r} has no attribute "
                                 f"'register_subclass'")
        if name not in ChildDispatcher._subclasses:
            ChildDispatcher._subclasses[name] = subcls
        else:
            raise KeyError(f"{name} subclass already exists")

class Child(ChildDispatcher): pass

c1 = ChildDispatcher(dispatch_arg = "Child")
assert isinstance(c1, Child)

c2 = Child()
assert isinstance(c2, Child)
How "dangerous" it is depends primarily on what the subclass would have done when initializing the object. It's entirely possible that it would not be properly initialized, having only run the base class's __init__(), and something would fail later because of, say, an uninitialized instance attribute.
Even without that, it seems like bad practice for most use cases. Easier to just instantiate the desired class in the first place.
Here's an example of one way you could do the same thing without changing __class__. Quoting @unutbu in the comments to the question:
Suppose you were modeling cellular automata. Suppose each cell could be in one of say 5 Stages. You could define 5 classes Stage1, Stage2, etc. Suppose each Stage class has multiple methods.
class Stage1(object):
    ...

class Stage2(object):
    ...

...

class Cell(object):
    def __init__(self):
        self.current_stage = Stage1()

    def goToStage2(self):
        self.current_stage = Stage2()

    def __getattr__(self, attr):
        return getattr(self.current_stage, attr)
If you allow changing __class__ you could instantly give a cell all the methods of a new stage (same names, but different behavior).
Same for changing current_stage, but this is a perfectly normal and pythonic thing to do, that won't confuse anyone.
Plus, it allows you to not change certain special methods you don't want changed, just by overriding them in Cell.
Plus, it works for data members, class methods, static methods, etc., in ways every intermediate Python programmer already understands.
If you refuse to change __class__, then you might have to include a stage attribute, and use a lot of if statements, or reassign a lot of attributes pointing to different stage's functions
Yes, I've used a stage attribute, but that's not a downside—it's the obvious visible way to keep track of what the current stage is, better for debugging and for readability.
And there's not a single if statement or any attribute reassignment except for the stage attribute.
And this is just one of multiple different ways of doing this without changing __class__.
In the comments I proposed modeling cellular automata as a possible use case for a dynamic __class__. Let's try to flesh out the idea a bit:
Using dynamic __class__:
class Stage(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

class Stage1(Stage):
    def step(self):
        if ...:
            self.__class__ = Stage2

class Stage2(Stage):
    def step(self):
        if ...:
            self.__class__ = Stage3

cells = [Stage1(x,y) for x in range(rows) for y in range(cols)]

def step(cells):
    for cell in cells:
        cell.step()
    yield cells
For lack of a better term, I'm going to call this
The traditional way: (mainly abarnert's code)
class Stage1(object):
    def step(self, cell):
        ...
        if ...:
            cell.goToStage2()

class Stage2(object):
    def step(self, cell):
        ...
        if ...:
            cell.goToStage3()

class Cell(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.current_stage = Stage1()

    def goToStage2(self):
        self.current_stage = Stage2()

    def __getattr__(self, attr):
        return getattr(self.current_stage, attr)

cells = [Cell(x,y) for x in range(rows) for y in range(cols)]

def step(cells):
    for cell in cells:
        cell.step(cell)
    yield cells
Comparison:
The traditional way creates a list of Cell instances, each with a current stage attribute.
The dynamic __class__ way creates a list of instances of subclasses of Stage. There is no need for a current stage attribute since __class__ already serves this purpose.
The traditional way uses goToStage2, goToStage3, ... methods to switch stages.
The dynamic __class__ way requires no such methods. You just reassign __class__.
The traditional way uses the special method __getattr__ to delegate some method calls to the appropriate stage instance held in the self.current_stage attribute.
The dynamic __class__ way does not require any such delegation. The instances in cells are already the objects you want.
The traditional way needs to pass the cell as an argument to Stage.step. This is so cell.goToStageN can be called.
The dynamic __class__ way does not need to pass anything. The object we are dealing with has everything we need.
Conclusion:
Both ways can be made to work. To the extent that I can envision how these two implementations would pan out, it seems to me the dynamic __class__ implementation will be simpler (no Cell class), more elegant (no ugly goToStage2 methods, no brain-teasers like why you need to write cell.step(cell) instead of cell.step()), and easier to understand (no __getattr__, no additional level of indirection).