How do you use implementation inheritance in Python, that is to say, have the public attributes x and protected attributes _x of the implementation-inherited base classes become private attributes __x of the derived class?
In other words, in the derived class:
accessing the public attribute x or protected attribute _x should look up x or _x respectively as usual, except it should skip the implementation-inherited base classes;
accessing the private attribute __x should look up __x as usual, except it should look up x and _x instead of __x for the implementation-inherited base classes.
In C++, implementation inheritance is achieved by using the private access specifier in the base class declarations of a derived class, while the more common interface inheritance is achieved by using the public access specifier:
class A: public B, private C, private D, public E { /* class body */ };
For instance, implementation inheritance is needed to implement the class Adapter design pattern, which relies on class inheritance (not to be confused with the object Adapter design pattern, which relies on object composition) and consists of converting the interface of an Adaptee class into the interface of a Target abstract class by using an Adapter class that inherits both the interface of the Target abstract class and the implementation of the Adaptee class (cf. the Design Patterns book by Erich Gamma et al.):
Here is a Python program specifying what is intended, based on the above class diagram:
import abc

class Target(abc.ABC):
    @abc.abstractmethod
    def request(self):
        raise NotImplementedError

class Adaptee:
    def __init__(self):
        self.state = "foo"

    def specific_request(self):
        return "bar"

class Adapter(Target, private(Adaptee)):
    def request(self):
        # Should access self.__state and Adaptee.specific_request(self)
        return self.__state + self.__specific_request()
a = Adapter()

# Test 1: the implementation of Adaptee should be inherited
try:
    assert a.request() == "foobar"
except AttributeError:
    assert False

# Test 2: the interface of Adaptee should NOT be inherited
try:
    a.specific_request()
except AttributeError:
    pass
else:
    assert False
You don't want to do this. Python is not C++, nor is C++ Python. How classes are implemented is completely different and so will lead to different design patterns. You do not need to use the class adapter pattern in Python, nor do you want to.
The only practical way to implement the adapter pattern in Python is either by using composition, or by subclassing the Adaptee without hiding that you did so.
I say practical here because there are ways to sort of make it work, but this path would take a lot of work to implement and is likely to introduce hard-to-track-down bugs, and it would make debugging and code maintenance much, much harder. Forget about 'is it possible'; you need to worry about 'why would anyone ever want to do this'.
I'll try to explain why.
I'll also tell you how the impractical approaches might work. I'm not actually going to implement these, because that's way too much work for no gain, and I simply don't want to spend any time on that.
But first we have to clear up several misconceptions. There are some very fundamental gaps in your understanding of Python and how its model differs from the C++ model: how privacy is handled, and the compilation and execution philosophies. So let's start with those:
Privacy models
First of all, you can't apply C++'s privacy model to Python, because Python has no encapsulation privacy. At all. You need to let go of this idea, entirely.
Names starting with a single underscore are not actually private, not in the way C++ privacy works. Nor are they 'protected'. Using an underscore is just a convention; Python does not enforce access control. Any code can access any attribute on instances or classes, whatever naming convention was used. Instead, when you see a name that starts with an underscore you can assume that the name is, by convention, not part of the public interface, that is, that these names can be changed without notice or consideration for backwards compatibility.
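To make that concrete, here is a minimal sketch (the class and attribute names are made up for illustration) showing that a leading underscore changes nothing about access:

class Config:
    def __init__(self):
        self._timeout = 30   # "non-public" by convention only

c = Config()
print(c._timeout)   # 30 -- no error; Python enforces nothing here
c._timeout = 60     # outside code can read and write it freely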
Quoting from the Python tutorial section on the subject:
“Private” instance variables that cannot be accessed except from inside an object don’t exist in Python. However, there is a convention that is followed by most Python code: a name prefixed with an underscore (e.g. _spam) should be treated as a non-public part of the API (whether it is a function, a method or a data member). It should be considered an implementation detail and subject to change without notice.
It's a good convention, but not even something you can rely on consistently. E.g. the collections.namedtuple() class generator generates a class with 5 different methods and attributes that all start with an underscore but are all meant to be public, because the alternative would be to place arbitrary restrictions on what attribute names you can give the contained elements, and that would make it incredibly hard to add additional methods in future Python versions without breaking a lot of code.
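For example (this is standard library behaviour, not anything project-specific):

from collections import namedtuple

Point = namedtuple("Point", ["x", "y"])
p = Point(1, 2)

# All of these start with an underscore, yet are documented, public API:
print(p._fields)         # ('x', 'y')
print(p._asdict())       # {'x': 1, 'y': 2} (a regular dict on modern Pythons)
print(p._replace(x=10))  # Point(x=10, y=2)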
Names starting with two underscores (and none at the end) are not private either, not in a class-encapsulation sense such as the C++ model. They are class-private names: these names are rewritten at compile time to produce a per-class namespace, in order to avoid collisions.
In other words, they are used to avoid a problem very similar to the namedtuple issue described above: to remove limits on what names a subclass can use. If you ever need to design base classes for use in a framework, where subclasses should have the freedom to name methods and attributes without limit, that's where you use __name class-private names. The Python compiler will rewrite __attribute_name to _ClassName__attribute_name when used inside a class statement as well as in any functions that are being defined inside a class statement.
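A small sketch of that rewriting (the class names are made up); note that the mangled name remains perfectly accessible from the outside:

class Plugin:
    def __run(self):              # stored as Plugin._Plugin__run
        return "base implementation"

class CustomPlugin(Plugin):
    def __run(self):              # stored as CustomPlugin._CustomPlugin__run, no clash
        return "subclass implementation"

p = CustomPlugin()
print(p._Plugin__run())           # "base implementation" -- still reachable, just renamed
print(p._CustomPlugin__run())     # "subclass implementation"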
Note that C++ doesn't use names to indicate privacy. Instead, privacy is a property of each identifier, within a given namespace, as processed by the compiler. The compiler enforces access control; private names are not accessible and will lead to compilation errors.
Without a privacy model, your requirement that public attributes x and protected attributes _x of the implementation-inherited base classes become private attributes __x of the derived class is not attainable.
Compilation and execution models
C++
C++ compilation produces binary machine code aimed at execution directly by your CPU. If you want to extend a class from another project, you can only do so if you have access to additional information, in the form of header files, to describe what API is available. The compiler combines information in the header files with tables stored with the machine code and your source code to build more machine code; e.g. inheritance across library boundaries is handled through virtualisation tables.
Effectively, there is very little left of the objects used to construct the program with. You generally don't create references to class or method or function objects, the compiler has taken those abstract ideas as inputs but the output produced is machine code that doesn't need most of those concepts to exist any more. Variables (state, local variables in methods, etc.) are stored either on the heap or on the stack, and the machine code accesses these locations directly.
Privacy is used to direct compiler optimisations, because the compiler can, at all times, know exactly what code can change what state. Privacy also makes virtualisation tables and inheritance from 3rd-party libraries practical, as only the public interface needs to be exposed. Privacy is an efficiency measure, primarily.
Python
Python, on the other hand, runs Python code using a dedicated interpreter runtime, itself a piece of machine code compiled from C code, which has a central evaluation loop that takes Python-specific op-codes to execute your code. Python source code is compiled into bytecode roughly at the module and function levels, stored as a nested tree of objects.
These objects are fully introspectable, using a common model of attributes, sequences and mappings. You can subclass classes without having to have access to additional header files.
In this model, a class is an object with references to base classes, as well as a mapping of attributes (which includes any functions which become bound methods through access on instances). Any code to be executed when a method is called on an instance is encapsulated in code objects attached to function objects stored in the class attribute mapping. The code objects are already compiled to bytecode, and interaction with other objects in the Python object model is through runtime lookups of references, with the attribute names used for those lookups stored as constants within the compiled bytecode if the source code used fixed names.
From the point of view of executing Python code, variables (state and local variables) live in dictionaries (the Python kind, ignoring the internal implementation as hash maps) or, for local variables in functions, in an array attached to the stack frame object. The Python interpreter translates access to these to access to values stored on the heap.
This makes Python slow, but also much more flexible when executing. You can not only introspect the object tree; most of the tree is writable, letting you replace objects at will and so change how the program behaves in nearly limitless ways. And again, there are no privacy controls enforced.
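A short illustration of that runtime model (all names here are made up for the example):

class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return "hello " + self.name

print(Greeter.__mro__)                  # (Greeter, object): the classes searched, in order
print(Greeter.__dict__["greet"])        # the plain function object in the class mapping
print(Greeter.greet.__code__.co_names)  # ('name',): attribute names baked into the bytecode

# Most of this tree is writable; replacing the function changes behaviour immediately.
Greeter.greet = lambda self: "patched"
print(Greeter("world").greet())         # "patched"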
Why use class adapters in C++, and not in Python
My understanding is that experienced C++ coders will use a class adapter (using subclassing) over an object adapter (using composition), because they need to pass compiler-enforced type checks (they need to pass the instances to something that requires the Target class or a subclass thereof), and they need to have fine control over object lifetimes and memory footprints. So, rather than have to worry about the lifetime or memory footprint of an encapsulated instance when using composition, subclassing gives you more complete control over the instance lifetime of your adapter.
This is especially helpful when it might not be practical or even possible to alter the implementation of how the adaptee class controls instance lifetime. At the same time, you wouldn't want to deprive the compiler of the optimisation opportunities offered by private and protected attribute access. A class that exposes both the Target and Adaptee interfaces offers fewer options for optimisation.
In Python you almost never have to deal with such issues. Python's object lifetime handling is straightforward, predictable and works the same for every object anyway. If lifetime management or memory footprints were to become an issue you'd probably already be moving the implementation to an extension language like C++ or C.
Next, most Python APIs do not require a specific class or subclass. They only care about the right protocols, that is, if the right methods and attributes are implemented. As long as your Adapter has the right methods and attributes, it'll do fine. See Duck Typing; if your adapter walks like a duck, and talks like a duck, it surely must be a duck. It doesn't matter if that same duck can also bark like a dog.
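A tiny illustration (the names are made up): no subclassing of Target is needed as long as the right method is there.

def perform(target):
    # Works for anything with a request() method; no isinstance() check.
    return target.request()

class Walkie:                 # not a Target subclass at all
    def request(self):
        return "quack"

print(perform(Walkie()))      # "quack"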
The practical reasons why you don't do this in Python
Let's move to practicalities. We'll need to update your example Adaptee class to make it a bit more realistic:
class Adaptee:
    def __init__(self, arg_foo=42):
        self.state = "foo"
        self._bar = arg_foo % 17 + 2 * arg_foo

    def _ham_spam(self):
        if self._bar % 2 == 0:
            return f"ham: {self._bar:06d}"
        return f"spam: {self._bar:06d}"

    def specific_request(self):
        return self._ham_spam()
This object not only has a state attribute; it also has a _bar attribute and a non-public method _ham_spam.
Now, from here on out I'm going to ignore the fact that your basic premise is flawed because there is no privacy model in Python, and instead re-interpret your question as a request to rename the attributes.
For the above example that would become:
state -> __state
_bar -> __bar
_ham_spam -> __ham_spam
specific_request -> __specific_request
You now have a problem, because the code in _ham_spam and specific_request has already been compiled. The implementation for these methods expects to find _bar and _ham_spam attributes on the self object passed in when called. Those names are constants in their compiled bytecode:
>>> import dis
>>> dis.dis(Adaptee._ham_spam)
  8           0 LOAD_FAST                0 (self)
              2 LOAD_ATTR                0 (_bar)
              4 LOAD_CONST               1 (2)
              6 BINARY_MODULO
              # .. etc., remainder elided ..
The LOAD_ATTR opcode in the above Python bytecode disassembly excerpt will only work correctly if the local variable self has an attribute named _bar.
Note that self can be bound to an instance of Adaptee as well as of Adapter, something you'd have to take into account if you wanted to change how this code operates.
So, it is not enough to simply rename method and attribute names.
Overcoming this problem would require one of two approaches:
intercept all attribute access on both the class and instance levels, to translate between the two models; or
rewrite the implementations of all the methods.
Neither of these is a good idea. Certainly neither of them are going to be more efficient or practical, compared to creating a composition adapter.
Impractical approach #1: rewrite all attribute access
Python is dynamic, and you could intercept all attribute access on both the class and the instance levels. You need both, because you have a mix of class attributes (_ham_spam and specific_request), and instance attributes (state and _bar).
You can intercept instance-level attribute access by implementing all methods in the Customizing attribute access section (you don't need __getattr__ for this case). You'll have to be very careful, because you'll need access to various attributes of your instances while controlling access to those very attributes. You'll need to handle setting and deleting as well as getting. This lets you control most attribute access on instances of Adapter().
You would do the same at the class level by creating a metaclass for whatever class your private() adapter would return, and implementing the exact same hook methods for attribute access there. You'll have to take into account that your class can have multiple base classes, so you'd need to handle these as layered namespaces, using their MRO ordering. Attribute interactions with the Adapter class (such as Adapter._special_request to introspect the inherited method from Adaptee) will be handled at this level.
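To give a flavour of what "intercepting attribute access" means here, below is a deliberately incomplete sketch that reuses the Target and Adaptee classes from above. It only translates the mangled names on read access; it does none of the hiding, setting, deleting or metaclass-level work just described, so it is nowhere near a real private() implementation.

class Adapter(Target, Adaptee):
    # Maps the class-private (mangled) names back to Adaptee's real names.
    _RENAMES = {
        "_Adapter__state": "state",
        "_Adapter__specific_request": "specific_request",
    }

    def __getattribute__(self, name):
        # Translate the mangled name, then fall back to normal lookup.
        name = type(self)._RENAMES.get(name, name)
        return super().__getattribute__(name)

    def request(self):
        # self.__state is compiled to self._Adapter__state, then translated back.
        return self.__state + self.__specific_request()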
Sounds easy enough, right? Except that the Python interpreter has many optimisations to ensure it isn't too slow for practical work. If you start intercepting every attribute access on instances, you will kill a lot of these optimisations (such as the method call optimisations introduced in Python 3.7). Worse, Python ignores the attribute access hooks for special method lookups.
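For instance, an implicit special-method call never even consults the instance (or any instance-level hook):

class C:
    pass

c = C()
c.__len__ = lambda: 3      # attached to the instance, not the class
try:
    len(c)
except TypeError:
    # len() looks up __len__ on type(c) only, bypassing the instance
    # and any instance-level __getattribute__ hook entirely.
    print("instance __len__ ignored")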
And you have now injected a translation layer, implemented in Python, invoked multiple times for every interaction with the object. This will be a performance bottleneck.
Last but not least, to do this in a generic way, where you can expect private(Adaptee) to work in most circumstances is hard. Adaptee could have other reasons to implement the same hooks. Adapter or a sibling class in the hierarchy could also be implementing the same hooks, and implement them in a way that means the private(...) version is simply bypassed.
Invasive all-out attribute interception is fragile and hard to get right.
Impractical approach #2: rewriting the bytecode
This goes down the rabbit hole quite a bit further. If attribute rewriting isn't practical, how about rewriting the code of Adaptee?
Yes, you could, in principle, do this. There are tools available to directly rewrite bytecode, such as codetransformer. Or you could use the inspect.getsource() function to read the on-disk Python source code for a given function, then use the ast module to rewrite all attribute and method access, then compile the resulting updated AST to bytecode. You'd have to do so for all methods in the Adaptee MRO, and produce a replacement class dynamically that'll achieve what you want.
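As a rough sketch of the source-rewriting flavour (the function and mapping names are made up, and this glosses over decorators, closures, globals and source availability):

import ast
import inspect
import textwrap

class _AttributeRenamer(ast.NodeTransformer):
    """Rewrite attribute names in a function's AST according to a mapping."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Attribute(self, node):
        self.generic_visit(node)
        node.attr = self.mapping.get(node.attr, node.attr)
        return node

def rewritten(func, mapping):
    source = textwrap.dedent(inspect.getsource(func))   # needs on-disk source
    tree = _AttributeRenamer(mapping).visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    namespace = {}
    exec(compile(tree, "<rewritten>", "exec"), func.__globals__, namespace)
    return namespace[func.__name__]

# e.g. rewritten(Adaptee.specific_request, {"_ham_spam": "_Adapter__ham_spam"})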
This, again, is not easy. The pytest project does something like this, they rewrite test assertions to provide much more detailed failure information than otherwise possible. This simple feature requires a 1000+ line module to achieve, paired with a 1600-line test suite to ensure that it does this correctly.
And what you've then achieved is bytecode that doesn't match the original source code, so anyone having to debug this code will have to deal with the fact that the source code the debugger sees doesn't match up with what Python is executing.
You'll also lose the dynamic connection with the original base class. Direct inheritance without code rewriting lets you dynamically update the Adaptee class; rewriting the code forces a disconnect.
Other reasons these approaches can't work
I've ignored a further issue that neither of the above approaches can solve. Because Python doesn't have a privacy model, there are plenty of projects out there where code interacts with class state directly.
E.g., what if your Adaptee() implementation relies on a utility function that will try to access state or _bar directly? It's part of the same library, the author of that library would be well within their rights to assume that accessing Adaptee()._bar is safe and normal. Neither attribute intercepting nor code rewriting will fix this issue.
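E.g. something like this, hypothetical but entirely idiomatic within such a library:

def summarise(adaptee):
    # A helper elsewhere in the Adaptee library; the name is made up.
    # Direct access to state and _bar is normal here -- and would break
    # against an Adapter whose attributes were renamed to __state / __bar.
    return f"{adaptee.state} ({adaptee._bar})"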
I also ignored the fact that isinstance(a, Adaptee) will still return True, but if you have hidden its public API by renaming, you have broken that contract. For better or worse, Adapter is a subclass of Adaptee.
TLDR
So, in summary:
Python has no privacy model. There is no point in trying to enforce one here.
The practical reasons that necessitate the class adapter pattern in C++ don't exist in Python.
Neither dynamic attribute proxying nor code transformation is going to be practical in this case, and both introduce more problems than they solve.
You should instead use composition, or just accept that your adapter is both a Target and an Adaptee and so use subclassing to implement the methods required by the new interface without hiding the adaptee interface:
class CompositionAdapter(Target):
    def __init__(self, adaptee):
        self._adaptee = adaptee

    def request(self):
        return self._adaptee.state + self._adaptee.specific_request()

class SubclassingAdapter(Target, Adaptee):
    def request(self):
        return self.state + self.specific_request()
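A quick usage sketch of both (the exact output depends on which Adaptee definition is in effect):

a = CompositionAdapter(Adaptee())
print(a.request())       # e.g. "fooham: 000092" with the expanded Adaptee above

b = SubclassingAdapter()
print(b.request())       # same result; specific_request() is also still visible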
Python doesn't have a way of defining private members like you've described (docs).
You could use encapsulation instead of inheritance and call the method directly, as you noted in your comment. This would be my preferred approach, and it feels the most "pythonic".
class Adapter(Target):
    def request(self):
        return Adaptee.specific_request(self)
In general, Python's approach to classes is much more relaxed than what is found in C++. Python supports duck-typing, so there is no requirement to subclass Adaptee, as long as the interface of Target is satisfied.
If you really want to use inheritance, you could override interfaces you don't want exposed to raise an AttributeError, and use the underscore convention to denote private members.
class Adaptee:
    def specific_request(self):
        return "foobar"

    # make a "private" copy
    _specific_request = specific_request

class Adapter(Target, Adaptee):
    def request(self):
        # call the "private" implementation
        return self._specific_request()

    def specific_request(self):
        raise AttributeError()
This question has more suggestions if you want alternatives for faking private methods.
If you really wanted true private methods, you could probably implement a metaclass that overrides object.__getattribute__. But I wouldn't recommend it.
Everything is an object in Python!
I was wondering: what exactly is the definition of an object in Python? I can't find a clear answer. Any idea?
I have found some opinions:
1. Objects can be assigned to a variable or passed as an argument to a function.
2. Objects must have attributes and methods.
3. Objects are subclassable.
An "Object is simply a collection of data (variables) and methods (functions) that act on those data." To learn more I recommend you spend some time reading Python Objects and Class and work though all the code samples.
From the Python documentation:
Classes
Classes provide a means of bundling data and functionality together. Creating a new class creates a new type of object, allowing new instances of that type to be made. Each class instance can have attributes attached to it for maintaining its state. Class instances can also have methods (defined by its class) for modifying its state.
Compared with other programming languages, Python’s class mechanism adds classes with a minimum of new syntax and semantics. It is a mixture of the class mechanisms found in C++ and Modula-3. Python classes provide all the standard features of Object Oriented Programming: the class inheritance mechanism allows multiple base classes, a derived class can override any methods of its base class or classes, and a method can call the method of a base class with the same name. Objects can contain arbitrary amounts and kinds of data. As is true for modules, classes partake of the dynamic nature of Python: they are created at runtime, and can be modified further after creation.
Related: Python object conversion
I recently learned that Python allows you to change an instance's class like so:
class Robe:
    pass

class Dress:
    pass

r = Robe()
r.__class__ = Dress
I'm trying to figure out whether there is a case where 'transmuting' an object like this can be useful. I've messed around with this in IDLE, and one thing I've noticed is that assigning a different class doesn't call the new class's __init__ method, though this can be done explicitly if needed.
Virtually every use case I can think of would be better served by composition, but I'm a coding newb so what do I know. ;)
There is rarely a good reason to do this for unrelated classes, like Robe and Dress in your example. Without a bit of work, it's hard to ensure that the object you get in the end is in a sane state.
However, it can be useful when inheriting from a base class, if you want to use a non-standard factory function or constructor to build the base object. Here's an example:
class Base(object):
    pass

def base_factory():
    return Base()  # in real code, this would probably be something opaque

class Derived(Base):
    def __new__(cls):
        self = base_factory()     # get an instance of Base
        self.__class__ = Derived  # and turn it into an instance of Derived
        return self
In this example, the Derived class's __new__ method wants to construct its object using the base_factory method which returns an instance of the Base class. Often this sort of factory is in a library somewhere, and you can't know for certain how it's making the object (you can't just call Base() or super(Derived, cls).__new__(cls) yourself to get the same result).
The instance's __class__ attribute is rewritten so that the result of calling Derived.__new__ will be an instance of the Derived class, which ensures that it will have the Derived.__init__ method called on it (if such a method exists).
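A quick check of what you end up with:

d = Derived()
print(type(d) is Derived)    # True: the factory-made Base was relabelled
print(isinstance(d, Base))   # True: it is still a Base as well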
I remember using this technique ages ago to “upgrade” existing objects after recognizing what kind of data they hold. It was a part of an experimental XMPP client. XMPP uses many short XML messages (“stanzas”) for communication.
When the application received a stanza, it was parsed into a DOM tree. Then the application needed to recognize what kind of stanza it is (a presence stanza, message, automated query etc.). If, for example, it was recognized as a message stanza, the DOM object was “upgraded” to a subclass that provided methods like “get_author”, “get_body” etc.
I could of course just make a new class to represent a parsed message, make a new object of that class and copy the relevant data from the original XML DOM object. There were two benefits of changing the object's class in place, though. Firstly, XMPP is a very extensible standard, and it was useful to still have easy access to the original DOM object in case some other part of the code found something useful there, or while debugging. Secondly, profiling the code told me that creating a new object and explicitly copying data is much slower than just reusing the object that would be quickly destroyed anyway; the difference was enough to matter in XMPP, which uses many short messages.
I don't think any of these reasons justifies the use of this technique in production code, unless maybe you really need the (not that big) speedup in CPython. It's just a hack which I found useful to make code a bit shorter and faster in the experimental application. Note also that this technique will easily break JIT engines in non-CPython implementations, making the code much slower!
Is it proper practice to have all code within classes? I have one class that does all my calculating and whatnot. But I have all the rest of the code (mainly used to call the class) outside of a class. It looks like this.
class bigClass:
    # functions and whatnot
    # blah blah
    def bigClassfunction(self):
        ...  # the calculating happens here

b = bigClass()
b.bigClassfunction()
My question is whether those last two lines should go in a class of their own? Or do I just leave them to float about not bound to a class.
That's absolutely OK, there's no need to put them in a class. A function could be an option if you need to repeat the code several times.
A class shouldn't be used for things like this; the role of a class, as described on Wikipedia, is:
In object-oriented programming, a class is a construct that is used to create instances of itself – referred to as class instances, class objects, instance objects or simply objects. A class defines constituent members which enable its instances to have state and behavior. Data field members (member variables or instance variables) enable a class instance to maintain state. Other kinds of members, especially methods, enable the behavior of class instances. Classes define the type of their instances.
Although you can embed this code in a class, it would be unnecessary to put this inside a class if it needs to be executed only once.
EDIT:
As I now understand, the confusion is about how to tell Python which code to run first, like you would do in Java with a main method in the ProjectName class. In Python, the code runs top-down. Each statement is executed as it is encountered. That's why you cannot reference a class before its definition, for example.
obj = Klass()  # Doesn't work: NameError, Klass is not defined yet
class Klass: pass
Your question is not especially clear, but you would always put all code related to a class within the class. It makes no design sense to do otherwise.
Some people put their "main" code into a block such as:
if __name__ == '__main__':
    foo()
    bar()
See this thread for more information.
Do not use classes for the sake of having classes, however. It isn't very "Pythonic".
As stated in the pickle documentation, classes are normally pickled in such a way that they require the exact same class to be present in a module on the receiving end. However, I do note that there are also __getstate__() and __setstate__() methods for classes, which affect how their instances are pickled...
How feasible would it be to create a metaclass that would allow pickling and unpickling of the classes created from that metaclass (in other words, the instances of that metaclass) even without the classes being present on the receiving end? (Though I think the metaclass would probably have to be present.)
Would utilizing a __reduce__() method in the class or metaclass also be something to look into?
The classes have to be somehow present on the receiving end, because methods are not stored with the objects. So I think that, unfortunately, using a specific metaclass can't help here…
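You can see this by looking at what actually ends up in the pickle stream: only the class's module and name plus the instance state are stored (the class name below is made up for the example):

import pickle
import pickletools

class Example:
    def method(self):
        return "hello"

e = Example()
e.attr = 42

# The disassembly shows a reference to __main__.Example and the instance
# __dict__ ({'attr': 42}); the method's code is never serialized.
pickletools.dis(pickle.dumps(e))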