I have several classes where I want to add a single property to each class (its md5 hash value) and calculate that hash value when initializing objects of that class, but otherwise maintain everything else about the class. Is there any more elegant way to do that in Python than to create a subclass for all the classes where I want to change the initialization and add the property?
You can add properties and override __init__ dynamically:
import hashlib

def newinit(self, orig):
    orig(self)
    # calculate the md5 here; hashing repr(self) is just a placeholder
    self._md5 = hashlib.md5(repr(self).encode()).hexdigest()

_orig_init = A.__init__
A.__init__ = lambda self: newinit(self, _orig_init)
A.md5 = property(lambda self: self._md5)
However, this can get quite confusing, even once you use more descriptive names than I did above. So I don't really recommend it.
Cleaner would probably be to simply subclass, possibly using a mixin class if you need to do this for multiple classes. You could also consider creating the subclasses dynamically using type() to cut down on the boilerplate further, but clarity of code would be my first concern.
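For instance, a minimal sketch of the mixin idea (the classes A and B are placeholders, and hashing repr(self) merely stands in for whatever data you actually want to hash):

import hashlib

class Md5Mixin:
    """Mixin that computes an md5 hash when the object is initialized."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # repr(self) is a placeholder for the data you really want hashed
        self._md5 = hashlib.md5(repr(self).encode()).hexdigest()

    @property
    def md5(self):
        return self._md5

# One-line subclasses, built dynamically with type() to cut the boilerplate:
HashedA = type("HashedA", (Md5Mixin, A), {})
HashedB = type("HashedB", (Md5Mixin, B), {})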
I have a class
class A:
    def sample_method(self):
        ...

I would like to decorate class A's sample_method() and override the contents of sample_method():
class DecoratedA(A):
    def sample_method(self):
        ...

The setup above resembles inheritance, but I need to keep the preexisting instance of class A when the decorated function is used.
a  # preexisting instance of class A
decorated_a = DecoratedA(a)
decorated_a.functionInClassA()  # functions in class A are called as usual on the preexisting instance
decorated_a.sample_method()  # should call the overridden sample_method() defined in DecoratedA
What is the proper way to go about this?
There isn't a straightforward way to do what you're asking. Generally, after an instance has been created, it's too late to mess with the methods its class defines.
As far as I see it, you have two options: either you create a wrapper or proxy object for your pre-existing instance, or you modify the instance to change its behavior.
A proxy defers most behavior to the object itself, while only adding (or overriding) some limited behavior of its own:
class Proxy:
    def __init__(self, obj):
        self.obj = obj

    def overridden_method(self):  # add your own limited behavior for a few things
        do_stuff()

    def __getattr__(self, name):  # and hand everything else off to the other object
        return getattr(self.obj, name)
__getattr__ isn't perfect here; it can only work for regular methods, not special __dunder__ methods that are often looked up directly on the class itself. If you want your proxy to match all possible behavior, you probably need to add things like __add__ and __getitem__, but that might not be necessary in your specific situation (it depends on what A does).
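For instance, a couple of explicit forwarders might look like this (a sketch; which dunders you need depends entirely on what A supports):

class Proxy:
    def __init__(self, obj):
        self.obj = obj

    def __getattr__(self, name):
        return getattr(self.obj, name)

    # dunders are looked up on the class itself, so forward them explicitly:
    def __len__(self):
        return len(self.obj)

    def __getitem__(self, key):
        return self.obj[key]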
As for changing the behavior of the existing object, one approach is to write your subclass and then change the existing object's class to be the subclass. This is a little sketchy, since you won't ever have initialized the object as the new class, but it might work if you're only modifying method behavior.
class ModifiedA(A):
    def overridden_method(self):  # do the override in a normal subclass
        do_stuff()

def modify_obj(obj):  # then change an existing object's type in place!
    obj.__class__ = ModifiedA  # this is not terribly safe, but it can work
You could also consider adding an instance variable that would shadow the method you want to override, rather than modifying __class__. Writing the function could be a little tricky, since it won't get bound to the object automatically when called (that only happens for functions that are attributes of a class, not attributes of an instance), but you could probably do the binding yourself (with partial or a lambda) if you need to access self.
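A minimal sketch of that shadowing idea (new_method, do_stuff, and obj are placeholders):

from functools import partial

def new_method(self):
    do_stuff()

# functions stored on an instance are not bound automatically, so bind by hand:
obj.overridden_method = partial(new_method, obj)
obj.overridden_method()  # now calls new_method(obj)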
First, why not just define it the way you want it from the beginning, instead of decorating it?
Second, why not decorate the method itself?
To answer the question:
You can reassign it on the class:

class A:
    def sample_method(self): ...

A.sample_method = DecoratedA.sample_method
but that affects every instance.
Another solution is to reassign the method for just one object.
import functools

a.sample_method = functools.partial(DecoratedA.sample_method, a)
Another solution is to (temporarily) change the type of an existing object.
a = A()
a.__class__ = DecoratedA
a.sample_method()
a.__class__ = A
I’m building a class that extends the list data structure in Python, called a Partitional. I’m adding a few methods that I find myself using frequently when dividing a list into partitions.
The class is initialized with a (nullable) list, which exists as an attribute on the class.
class Partitional(list):
    """Extends the list data type. Adds methods for dividing a list into partition sets
    and returning data about those partition sets"""

    def __init__(self, source_list: list = []):
        super().__init__()
        self.source_list: list = source_list
        self.n: int = len(source_list)
    ...
I want to be able to reliably replace list instances with Partitional instances without violating Liskov substitution. So for list’s methods, I wrote methods on the Partitional class that operate on self.source_list, e.g.
...
    def remove(self, matched_item):
        self.source_list.remove(matched_item)
        self.__init__(self.source_list)

    def pop(self, *args):
        popped_item = self.source_list.pop(*args)
        self.__init__(self.source_list)
        return popped_item

    def clear(self):
        self.source_list.clear()
        self.__init__(self.source_list)
...
(the __init__ call is there because the Partitional class builds some internal attributes based on self.source_list when it’s initialized, so these need to be rebuilt if source_list changes.)
And I also want Python's built-in functions that take a list as an argument to work with a Partitional instance, so I set to work writing method overrides for those as well, e.g.
...
    def __len__(self):
        return len(self.source_list)

    def __enumerate__(self):
        return enumerate(self.source_list)
...
The relevant built-in methods are a finite set for any given Python version, but... is there not a simpler way to do this?
My question:
Is there a way to write a class such that, if an instance of that class is used as the argument for a function, the class provides an attribute to the function instead, by default?
That way I’d only need to override this default behaviour for a subset of built-in methods.
So for example, if a use case involving a list instance looks like this:
example_list: list = [1,2,3,4,5]
length = len(example_list)
we substitute a Partitional instance built from the same list:
example_list: list = [1,2,3,4,5]
example_partitional = Partitional(example_list)
length = len(example_partitional)
and what’s “actually” happening is this:
length = len(example_partitional.source_list)
i.e.
length = len([1,2,3,4,5])
Other notes:
In working on this, I’ve realized that there are two broad categories of Liskov substitution violation possible:
Inherent violation, where the structure of the child class will make it incompatible with any use case where the child class is used in place of the parent class, e.g. if you override some fundamental property or structure of the parent.
Context-dependent violation, where, for any given piece of software, as long as you never use the child class in a way that would violate Liskov substitution, you're fine. E.g. you override a method on the parent class that would change how a built-in function acts when it takes an instance of the class as an argument, but you never use that built-in function with the class instance in your system. Or in any system that depends on your system. Or... (you see how relying on this caveat is not foolproof)
What I’m looking to do is come up with a technique that will protect against both categories of violation, without having to worry about use cases and context.
I want to create a configuration class with a cascading feature. What do I mean by this? Let's say we have a configuration class like this:
class BaseConfig(metaclass=ConfigMeta, ...):
    def getattr():
        return 'default values provided by the metaclass'
class Config(BaseConfig):
    class Embedding(BaseConfig, size=200):
        class WordEmbedding(Embedding):
            size = 300
When I use this in code, I will access the configuration as follows:
def function(Config, blah, blah):
    word_embedding_size = Config.Embedding.Word.size
    char_embedding_size = Config.Embedding.Char.size
The last line accesses a property, 'Char', which does not exist in the Embedding class. That should invoke getattr(), which should return 200 in this case. I am not familiar enough with metaclasses to make a good judgement, but I guess I need to define the __new__() of the metaclass.
Does this approach make sense, or is there a better way to do it?
EDIT:
class Config(BaseConfig):
    class Embedding(BaseConfig, size=200):
        class WordEmbedding(Embedding):
            size = 300
    class Log(BaseConfig, level=logging.DEBUG):
        class PREPROCESS(Log):
            level = logging.INFO

# When I use:
log = logging.getLogger(level=Config.Log.Model.level)  # level should be INFO
This is a bit confusing. I am not sure this would be the best notation to declare configurations with default parameters - it seems verbose. But yes, given the flexibility of metaclasses and magic methods in Python, it is possible for something like this to hold all the flexibility you need.
Just for the sake of it, I'd like to say that using nested classes as namespaces, like you are doing, is probably the only useful thing for nested classes. It is common to see people who misunderstand Python's OO try to make other uses of them.
So, for your problem, you need the final class to have a __getattr__ method that can fetch default values for attributes. These attributes in turn are declared as keywords to nested classes, which can also have the same metaclass. Otherwise, the hierarchy of nested classes just works for you to fetch nested attributes, using Python's dot notation.
Moreover, for each class in a nested set, one can pass in keyword parameters that are to be used as defaults if the next level of nested classes is not defined. In the given example, trying to access Config.Embedding.Char.size with a non-existing Char should return the default size. Note that a __getattr__ in Embedding can return a fake Char object - but that object is the one that has to yield a size attribute. So, our __getattr__ has to yield an object that itself has a proper __getattr__.
However, I will suggest a change to your requirements: instead of passing in the default values as keyword parameters, have a reserved name - like _default - inside which you can put your default attributes. That way, you can provide deeply nested default subtrees instead of just scalar values, and the implementation can possibly be simpler.
Actually, a lot simpler. By using keywords to the class as you propose, you'd need a metaclass to store those default parameters in a data structure (it would be possible in either __new__ or __init__, though). But by just using nested classes all the way, with a reserved name, a custom __getattr__ on the metaclass will work. It will be called for non-existent class attributes on the configuration classes themselves, and all one has to do, if a requested attribute does not exist, is try to retrieve the _default class I mentioned.
Thus, you can work with something like:
class ConfigMeta(type):
    def __getattr__(cls, attr):
        return cls._default

class Base(metaclass=ConfigMeta):
    pass

class Config(Base):
    class Embed(Base):
        class _default(Base):
            size = 200

        class Word(Base):
            size = 300

assert Config.Embed.Char.size == 200
assert Config.Embed.Word.size == 300
Btw - just last year I was working on a project to have configurations like this, with default values, but using a dictionary syntax - that is why I mentioned I am not sure nested classes would be a nice design. But since all the functionality can be provided by a metaclass with 3 LoC, I guess this beats anything else.
Also, that is why I think being able to nest whole default subtrees can be useful for what you want - I've been there.
You can use a metaclass to set the attribute:
class ConfigMeta(type):
    def __new__(mt, clsn, bases, attrs):
        try:
            _ = attrs['size']
        except KeyError:
            attrs['size'] = 300
        return super().__new__(mt, clsn, bases, attrs)
Now, if the class does not have a size attribute, it will be set to 300 (change this to meet your needs).
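A quick usage sketch (the class names here are just examples):

class Embedding(metaclass=ConfigMeta):
    pass

class WordEmbedding(metaclass=ConfigMeta):
    size = 500

print(Embedding.size)      # 300, filled in by the metaclass
print(WordEmbedding.size)  # 500, the explicit value wins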
I have the following problem: I have a model class in MVC and it has a special purpose. In certain cases, it should be able to override itself. Is this kind of behavior possible?
class Text(Document):
    a = StringField()
    b = StringField()

    def save(self):
        if 1 == Text.object(a=self.a).count():   # if a similar object exists in the db,
            self = Text.object(a=self.a).first() # get the instance from the db and
                                                 # override the original class
        else:  # otherwise, use the superclass' save function
            return super(Text, self).save()
There's no trivial way for an object to become another object in Python. Assigning to self won't do this; self is a local variable in the method definition, and assigning to it won't change the existing instance in any way; it will only make it inaccessible for the rest of the method.
There are a few ways to approach this problem. The preferred way is to have a method that returns the correct instance.
class Foo(...):
    def get_or_save(self):
        existing = load_from_database(self.bar)
        if existing is not None:
            return existing
        else:
            save_to_database(self)
            return self

new_inst = Foo()
new_inst.bar = "baz"
inst = new_inst.get_or_save()
# stop using new_inst
There is also a hackish way to get a similar effect to your original example. Ordinary Python classes store most of their attributes in a special __dict__ attribute. You can copy that, and it will be as though one instance were replaced by the other. Of course, that only works for perfectly plain Python classes, and may or may not work for classes defined in an ORM, or ones which retain state in more clever ways.
class Foo(...):
    def save(self):
        existing = load_from_database(self.bar)
        if existing is not None:
            self.__dict__ = existing.__dict__
        else:
            save_to_database(self)
Yes it is possible :-)
Seriously, using a conditional call to super as in your example will achieve the result.
However, the style of your example is a little confusing, and changing it may allow you to achieve your overall objectives more easily. (But neither of these directly affects your question.)
I would not recommend putting a method in your class called object unless I had no other choice.
The fact that you are passing self.a to Text.object, within method Text.save, doesn't seem right. It would be cleaner to simply call self.object() and have method object use self.a directly in its code.
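A sketch of that restructuring (keeping the question's Text.object query helper as-is, and using a made-up method name instead of object, per the point above):

class Text(Document):
    a = StringField()
    b = StringField()

    def similar(self):  # hypothetical name; avoids shadowing "object"
        # uses self.a directly rather than receiving it as a parameter
        return Text.object(a=self.a)

    def save(self):
        if self.similar().count() == 1:
            return self.similar().first()
        return super(Text, self).save()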
I have been messing around with pygame and Python and I want to be able to call a function when an attribute of my class has changed. My current solution is:
class ExampleClass(parentClass):
    def __init__(self):
        self.rect = pygame.rect.Rect(0, 0, 100, 100)

    def __setattr__(self, name, value):
        parentClass.__setattr__(self, name, value)
        dofancystuff()

Firstclass = ExampleClass()
This works fine, and dofancystuff is called when I change the rect value with Firstclass.rect = pygame.rect.Rect(0, 0, 100, 100). However, if I say Firstclass.rect.bottom = 3, then __setattr__, and therefore dofancystuff, is not called.
So my question, I guess, is: how can I intercept any change to an attribute of a subclass?
Edit: Also, if I am going about this the wrong way, please do tell - I'm not very knowledgeable when it comes to Python.
Well, the simple answer is you can't. In the case of Firstclass.rect = <...>, your __setattr__ is called. But in the case of Firstclass.rect.bottom = 3, the __setattr__ method of the Rect instance is called. The only solution I see is to create a derived class of pygame.rect.Rect where you override the __setattr__ method. You can also monkey-patch the Rect class, but this is discouraged for good reasons.
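A minimal sketch of that derived class (assuming pygame.rect.Rect can be subclassed this way, and that the question's dofancystuff is in scope):

import pygame

class NotifyingRect(pygame.rect.Rect):
    def __setattr__(self, name, value):
        super().__setattr__(name, value)  # let Rect do the actual assignment
        dofancystuff()                    # then fire the callback

ExampleClass.__init__ would then build self.rect = NotifyingRect(0, 0, 100, 100), and an assignment like Firstclass.rect.bottom = 3 would go through this __setattr__.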
You could try __getattr__, which should be called on Firstclass.rect.
But do this instead: create a separate class (a subclass of pygame.rect?) for ExampleClass.rect. Implement __setattr__ in that class. Now you will be told about anything that gets set on your rect member of ExampleClass. You can still implement __setattr__ in ExampleClass (and should), only now make sure you instantiate a version of your own rect class...
BTW: Don't call your objects Firstclass, as then it looks like a class as opposed to an object...
This isn't answering the question but it is relevant:
self.__dict__[name] = value
is probably better than
parentClass.__setattr__(self, name, value)
This is recommended by the Python documentation (http://docs.python.org/2/reference/datamodel.html?highlight=setattr#object.__setattr__) and makes more sense anyway in the general case, since it does not assume anything about the behaviour of parentClass's __setattr__.
Yay for unsolicited advice four years too late!
I think the reason why you have this difficulty deserves a little more information than is provided by the other answers.
The problem is, when you do:
myObject.attribute.thing = value
You're not assigning a value to attribute. The code is equivalent to this:
anAttribute = myObject.attribute
anAttribute.thing = value
As it's seen by myObject, all you're doing is getting the attribute; you're not setting the attribute.
Making subclasses of your attributes that you control, and can define __setattr__ for, is one solution.
An alternative solution, that may make sense if you have lots of attributes of different types and don't want to make lots of individual subclasses for all of them, is to override __getattribute__ or __getattr__ to return a facade to the attribute that performs the relevant operations in its __setattr__ method. I've not attempted to do this myself, but I imagine that you should be able to make a simple facade class that will act as a facade for any object.
Care would need to be taken in the choice between __getattribute__ and __getattr__. See the documentation for those methods for more information, but basically: if __getattr__ is used, the actual attributes will have to be encapsulated/obfuscated somehow so that __getattr__ handles requests for them, and if __getattribute__ is used, it'll have to retrieve attributes via calls to a base class.
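A hedged sketch of such a facade (untested, per the caveats above; Facade and _callback are made-up names):

class Facade:
    """Delegates reads to a wrapped object; writes also fire a callback."""
    def __init__(self, target, callback):
        # bypass our own __setattr__ while setting up internal state
        object.__setattr__(self, '_target', target)
        object.__setattr__(self, '_callback', callback)

    def __getattr__(self, name):
        # only called when normal lookup fails, i.e. for the target's attributes
        return getattr(self._target, name)

    def __setattr__(self, name, value):
        setattr(self._target, name, value)
        self._callback()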
If all you're trying to do is determine if some rects have been updated, then this is overkill.