Place methods of a subobject in my class - python

I know that what I'm doing is probably not the best way to do it, but right now I can't think of another way.
What I basically have is this:
class foo:
    def __init__(self):
        self.bar = ['bundy']
Currently, I'm defining a lot of methods for my class to return the result of the method on the list, like this:
def __len__(self):
    return len(self.bar)
Of course, there are also other methods and attributes that have nothing to do with bar - I'm not reinventing the list.
Is there an easier way to 'copy' the methods, so that I don't have to define them all one by one?

You have to define some methods one by one, like you are doing.
However, there is a base class in Python, other than list, that gives you a well-defined minimum set of methods you need to supply, and derives the remaining methods from that set.
These are the provided "abstract base classes" - what you want is to implement your object as a "MutableSequence" - then you only have to implement __getitem__, __setitem__, __delitem__, __len__ and insert to get the full list functionality.
In Python 3.x, just inherit your class from collections.abc.MutableSequence and implement those. (In Python 2.7 it is collections.MutableSequence instead.)
By doing this, you get the __contains__, __iter__, __reversed__, index, count, append, reverse, extend, pop, remove and __iadd__ methods for free.
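For instance, a minimal sketch of the original foo class rewritten as a MutableSequence (Python 3), implementing only the five required methods:

```python
from collections.abc import MutableSequence

class Foo(MutableSequence):
    def __init__(self):
        self.bar = ['bundy']

    def __getitem__(self, index):
        return self.bar[index]

    def __setitem__(self, index, value):
        self.bar[index] = value

    def __delitem__(self, index):
        del self.bar[index]

    def __len__(self):
        return len(self.bar)

    def insert(self, index, value):
        self.bar.insert(index, value)

f = Foo()
f.append('spam')          # provided for free by MutableSequence
print('bundy' in f)       # __contains__, also free -> True
print(list(reversed(f)))  # ['spam', 'bundy']
```

Everything past the five implemented methods is mixed in by the ABC, so there is nothing left to copy by hand.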

Related

Python: use special methods on a class (not necessarily its instances)

(Edited to frame it as a question as requested by the guidelines)
I am implementing this class to manage a library of prime numbers; the instances are supposed to work as a range object, but on primes only. I also want an easy way to iterate over all primes, check whether a number is prime, and look up the nth prime as if the class were a list, something like:
class primes: ...
for p in primes: ...
if p in primes: ...
p = primes[2] # = 5
Is there a general way to apply magic methods to classes themselves instead of their instances?
What do you think about the whole thing?
Not bad.
It could have taken other approaches, but this one is as nice as many others.
Just as a remark: you don't need the __metaclass__ attribute to be present on the created classes - that was an artifact on Python 2. One can use type(cls) to get to the metaclass (as you already do).
I tried to think of a way of staying with a single metaclass, but that, as you found out, is not possible: the correctly named "dunder" methods have to be present in the metaclass itself, just as you did there.
If you'd wish, this could be made a metaclass itself, instead of a decorator - the same logic would apply in the __new__ method before actually instantiating the class: that could avoid having a duplicate class without the "lifted" capabilities (the one you point to as __wrapped__), and could be made in a way as to be inherited by the subclasses - so no need to apply a decorator to subclasses. But then, that would require some care: when creating a new class you have to persist the "lift dunder methods" behavior and not just the "serve the dunder methods" of the dynamic metaclass.
And, instead of having an ordinary metaclass and instrument its __new__ method to create a dynamic metaclass and return its instance, you could instrument the __call__ method of a metaclass, and have effectively a "meta-meta-class". It would be even harder to understand, and with no gains, though - the decorator approach might be better in all cases.
Idea.
I thought I could use a metaclass to give the class its dunder methods as an instance of the metaclass. Since those methods were class-specific I thought of something like the following:
class Meta(type):
    def __iter__(cls):
        return cls.__cls_iter__()

class Klass(metaclass=Meta):
    @classmethod
    def __cls_iter__(cls):
        ...
It worked fine and making it into a decorator was easy, but it was still too specific, and implementing every method twice would have been repetitive.
Last Iteration.
I settled on a decorator that creates a different metaclass for every class it is used on; it creates only the __*__ methods that are required, and it also takes care of wrapping __cls_*__ methods as classmethods:
def dundered(my_cls):
    def lift_method(cls_method_name):
        def lifted_method(cls, *args, **kwargs):
            return getattr(cls, cls_method_name)(*args, **kwargs)
        lifted_method.__name__ = f"__{cls_method_name[6:]}"
        return lifted_method

    meta_dict = dict()
    for key, val in dict(my_cls.__dict__).items():
        if key == f"__cls_{key[6:-2]}__":
            if not isinstance(val, classmethod):
                setattr(my_cls, key, classmethod(val))
            meta_dict[f"__{key[6:]}"] = lift_method(key)
    return type(f"{type(my_cls).__name__}_dunderable", (type(my_cls),), meta_dict)(
        my_cls.__name__ + "__", (my_cls,), {"__wrapped__": my_cls}
    )
This is very easy to use and it seems to be working fine with any dunder method:
@dundered
class Klass:
    def __iter__(self):
        # iters on instances
        ...

    def __cls_iter__(cls):
        # iters on the class
        ...

for _ in Klass(...): ...  # calls __iter__
for _ in Klass: ...       # calls __cls_iter__

How to extend the list data structure in Python without violating Liskov substitution - supply an attribute instead of an instance?

I’m building a class that extends the list data structure in Python, called a Partitional. I’m adding a few methods that I find myself using frequently when dividing a list into partitions.
The class is initialized with a (nullable) list, which exists as an attribute on the class.
class Partitional(list):
    """Extends the list data type. Adds methods for dividing a list into partition sets
    and returning data about those partition sets"""

    def __init__(self, source_list: list = []):
        super().__init__()
        self.source_list: list = source_list
        self.n: int = len(source_list)
    ...
I want to be able to reliably replace list instances with Partitional instances without violating Liskov substitution. So for list’s methods, I wrote methods on the Partitional class that operate on self.source_list, e.g.
    ...
    def remove(self, matched_item):
        self.source_list.remove(matched_item)
        self.__init__(self.source_list)

    def pop(self, *args):
        popped_item = self.source_list.pop(*args)
        self.__init__(self.source_list)
        return popped_item

    def clear(self):
        self.source_list.clear()
        self.__init__(self.source_list)
    ...
(the __init__ call is there because the Partitional class builds some internal attributes based on self.source_list when it’s initialized, so these need to be rebuilt if source_list changes.)
And I also want Python’s built-in methods that take a list as an argument to work with a Partitional instance, so I set to work writing method overrides for those as well, e.g.
    ...
    def __len__(self):
        return len(self.source_list)

    def __enumerate__(self):
        return enumerate(self.source_list)
    ...
The relevant built-in methods are a finite set for any given Python version, but... is there not a simpler way to do this?
My question:
Is there a way to write a class such that, if an instance of that class is used as the argument for a function, the class provides an attribute to the function instead, by default?
That way I’d only need to override this default behaviour for a subset of built-in methods.
So for example, if a use case involving a list instance looks like this:
example_list: list = [1,2,3,4,5]
length = len(example_list)
we substitute a Partitional instance built from the same list:
example_list: list = [1,2,3,4,5]
example_partitional = Partitional(example_list)
length = len(example_partitional)
and what’s “actually” happening is this:
length = len(example_partitional.source_list)
i.e.
length = len([1,2,3,4,5])
Other notes:
In working on this, I’ve realized that there are two broad categories of Liskov substitution violation possible:
Inherent violation, where the structure of the child class will make it incompatible with any use case where the child class is used in place of the parent class, e.g. if you override some fundamental property or structure of the parent.
Context-dependent violation, where, for any given piece of software, so long as you never use the child class in a way that would violate Liskov substitution, you’re fine. E.g. You override a method on the parent class that would change how a built-in function acts when it takes an instance of the class as an argument, but you never use that built-in method with the class instance in your system. Or any system that depends on your system. Or... (you see how relying on this caveat is not foolproof)
What I’m looking to do is come up with a technique that will protect against both categories of violation, without having to worry about use cases and context.

Pythonic accessors / mutators for "internal" lists

I'm aware that attribute getters and setters are considered "unpythonic", and the pythonic way to do things is to simply use an normal attribute and use the property decorator if you later need to trigger some functionality when an attribute is accessed or set.
e.g. What's the pythonic way to use getters and setters?
But how does this apply when the value of an attribute is a list, for example?
class AnimalShelter(object):
    def __init__(self):
        self.dogs = []
        self.cats = []

class Cat(object):
    pass

class Dog(object):
    pass
Say that initially, the interface works like this:
# Create a new animal shelter
woodgreen = AnimalShelter()
# Add some animals to the shelter
dog1 = Dog()
woodgreen.dogs.append(dog1)
This would seem to be in line with the "pythonic" idea of just using straightforward attributes rather than creating getters, setters, mutators etc. I could have created an addDog method instead. But while not strictly speaking a setter (since it mutates the value of an attribute rather than setting an attribute), it still seems setter-like compared to my above solution.
But then, say that later on you need to trigger some functionality when dogs are added. You can't fall back on using the property decorator, since adding a dog is not setting an attribute on the object, but retrieving a list which is the value of that attribute and mutating that list.
What would be the "pythonic" way of dealing with such a situation?
What's unpythonic are useless getters and setters - since Python has strong support for computed attributes. This doesn't mean you shouldn't properly encapsulate your implementation.
In your example above, the way your AnimalShelter class handles its "owned" animals is an implementation detail and should not be exposed, so it's totally pythonic to use protected attributes and expose a relevant set of public methods / properties:
class AnimalShelter(object):
    def __init__(self):
        self._dogs = []
        self._cats = []

    def add_dog(self, dog):
        if dog not in self._dogs:
            self._dogs.append(dog)

    def get_dogs(self):
        return self._dogs[:]  # return a shallow copy

    # etc
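A quick usage sketch (repeating a condensed version of the class so it runs standalone) shows what the encapsulation buys you: duplicates are rejected, and mutating the returned copy never touches the shelter's internal state.

```python
class AnimalShelter(object):
    def __init__(self):
        self._dogs = []

    def add_dog(self, dog):
        if dog not in self._dogs:
            self._dogs.append(dog)

    def get_dogs(self):
        return self._dogs[:]  # shallow copy keeps _dogs private

shelter = AnimalShelter()
shelter.add_dog("rex")
shelter.add_dog("rex")     # duplicate is silently ignored
dogs = shelter.get_dogs()
dogs.clear()               # only clears the caller's copy
print(shelter.get_dogs())  # ['rex']
```

Later, if adding a dog needs to trigger extra behaviour, only add_dog changes; callers are unaffected.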

Python subclassing: adding properties

I have several classes where I want to add a single property to each class (its md5 hash value) and calculate that hash value when initializing objects of that class, but otherwise maintain everything else about the class. Is there any more elegant way to do that in python than to create a subclass for all the classes where I want to change the initialization and add the property?
You can add properties and override __init__ dynamically:
def newinit(self, orig):
    orig(self)
    self._md5 = ...  # calculate the md5 here

_orig_init = A.__init__
A.__init__ = lambda self: newinit(self, _orig_init)
A.md5 = property(lambda self: self._md5)
However, this can get quite confusing, even once you use more descriptive names than I did above. So I don't really recommend it.
Cleaner would probably be to simply subclass, possibly using a mixin class if you need to do this for multiple classes. You could also consider creating the subclasses dynamically using type() to cut down on the boilerplate further, but clarity of code would be my first concern.
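A rough sketch of that type()-based route, using a hypothetical with_md5 helper; hashing repr(self) is purely a placeholder, as real code would hash something meaningful about the object:

```python
import hashlib

def with_md5(cls):
    # Wrap the original __init__, then compute the hash once at init time.
    def __init__(self, *args, **kwargs):
        cls.__init__(self, *args, **kwargs)
        self._md5 = hashlib.md5(repr(self).encode()).hexdigest()

    md5 = property(lambda self: self._md5)
    # Build the subclass dynamically instead of writing it out by hand.
    return type(cls.__name__ + "WithMD5", (cls,),
                {"__init__": __init__, "md5": md5})

class A:
    def __init__(self, x):
        self.x = x

AWithMD5 = with_md5(A)
a = AWithMD5(42)
print(a.x, a.md5)  # original attribute plus a 32-hex-digit property
```

Applied to each class in turn, this keeps the subclassing approach while removing the per-class boilerplate.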

Python idiom for dict-able classes?

I want to do something like this:
class Dictable:
    def dict(self):
        raise NotImplementedError

class Foo(Dictable):
    def dict(self):
        return {'bar1': self.bar1, 'bar2': self.bar2}
Is there a more pythonic way to do this? For example, is it possible to overload the built-in conversion dict(...)? Note that I don't necessarily want to return all the member variables of Foo, I'd rather have each class decide what to return.
Thanks.
The Pythonic way depends on what you want to do. If your objects shouldn't be regarded as mappings in their own right, then a dict method is perfectly fine, but you shouldn't "overload" dict to handle dictables. Whether or not you need the base class depends on whether you want to do isinstance(x, Dictable); note that hasattr(x, "dict") would serve pretty much the same purpose.
If the classes are conceptually mappings of keys to values, then implementing the Mapping protocol seems appropriate. I.e., you'd implement
__getitem__
__iter__
__len__
and inherit from collections.Mapping to get the other methods. Then you get dict(Foo()) for free. Example:
class Foo(Mapping):
    def __getitem__(self, key):
        if key not in ("bar1", "bar2"):
            raise KeyError("{} not found".format(repr(key)))
        return getattr(self, key)

    def __iter__(self):
        yield "bar1"
        yield "bar2"

    def __len__(self):
        return 2
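To see the "dict(Foo()) for free" claim in action, here is a runnable variant; the __init__ setting bar1 and bar2 is an assumption, since the snippet above omits it:

```python
from collections.abc import Mapping

class Foo(Mapping):
    def __init__(self):
        self.bar1, self.bar2 = "a", "b"  # assumed attributes for the demo

    def __getitem__(self, key):
        if key not in ("bar1", "bar2"):
            raise KeyError("{} not found".format(repr(key)))
        return getattr(self, key)

    def __iter__(self):
        yield "bar1"
        yield "bar2"

    def __len__(self):
        return 2

# The Mapping mixins supply keys(), items(), __contains__, etc.,
# so the dict constructor can consume the object directly.
print(dict(Foo()))  # {'bar1': 'a', 'bar2': 'b'}
```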
Firstly, look at collections.abc, which describes the Python abstract base class protocol (equivalent to interfaces in static languages).
Then, decide if you want to write your own ABC or make use of an existing one; in this case, Mapping might be what you want.
Note that although the dict constructor (i.e. dict(my_object)) is not overridable, if it encounters an iterable object that yields a sequence of key-value pairs, it will construct a dict from that; i.e. (Python 2; for Python 3 replace iteritems with items):
def __iter__(self):
    return {'bar1': self.bar1, 'bar2': self.bar2}.iteritems()
However, if your classes are intended to behave like a dict you shouldn't do this as it's different from the expected behaviour of a Mapping instance, which is to iterate over keys, not key-value pairs. In particular it would cause for .. in to behave incorrectly.
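A quick illustration of that pitfall, using a hypothetical PairFoo whose __iter__ yields pairs (Python 3 spelling): dict() still works, but plain iteration produces tuples rather than keys, which is not what Mapping consumers expect.

```python
class PairFoo:
    def __init__(self):
        self.bar1, self.bar2 = 1, 2

    def __iter__(self):
        # Yields (key, value) pairs instead of keys.
        return iter({'bar1': self.bar1, 'bar2': self.bar2}.items())

f = PairFoo()
print(dict(f))  # {'bar1': 1, 'bar2': 2} -- the constructor accepts pairs
print(list(f))  # [('bar1', 1), ('bar2', 2)] -- not ['bar1', 'bar2']
```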
Most of the answers here are about making your class behave like a dict, which isn't actually what you asked. If you want to express the idea, "I am a class that can be turned into a dict," I would simply define a bunch of classes and have them each implement .dict(). Python favors duck-typing (what an object can do) over what an object is. The ABC doesn't add much. Documentation serves the same purpose.
You can certainly overload dict() but you almost never want to! Too many aspects of the standard library depend upon dict being available, and you will break most of its functionality. You can probably do something like this, though:
class Dictable:
    def dict(self):
        return self.__dict__
