I was reading the SocketServer.py module.
One method of class BaseServer, '_handle_request_noblock()', contains this line:
request, client_address = self.get_request()
But get_request() is only defined in BaseServer's subclasses, TCPServer and UDPServer.
How can I understand this?
On top of dynamic dispatch, as mentioned by @JeremyFisher, this behavior is also related to the OOP concepts of inheritance and polymorphism, which in turn relate to Liskov's Substitution Principle. In Python, the mechanism behind it is visible in the MRO (method resolution order).
Let's say we have this hierarchy of classes
>>> class BaseServer:
...     def handle_request_noblock(self):
...         print("BaseServer:handle_request_noblock")
...         self.get_request()
...
>>> class TCPServer(BaseServer):
...     def get_request(self):
...         print("TCPServer:get_request")
...
>>>
>>> class UDPServer(BaseServer):
...     def get_request(self):
...         print("UDPServer:get_request")
...
>>>
Let's see the attributes of the child class objects
>>> # base_server = BaseServer()  # This should not be done: "abstract" base classes are just blueprints and thus incomplete, as in our example where BaseServer has no definition of get_request. A derived "concrete" subclass is needed.
>>>
>>> tcp_server = TCPServer()
>>> udp_server = UDPServer()
>>>
>>> print(dir(tcp_server))
[...<trimmed for better viewing>..., 'get_request', 'handle_request_noblock']
>>> print(dir(udp_server))
[...<trimmed for better viewing>..., 'get_request', 'handle_request_noblock']
>>>
As you can see, both subclasses have the handle_request_noblock attribute even though they don't explicitly define it; they simply inherited it from the base class.
an object created through inheritance, a "child object", acquires all the properties and behaviors of the "parent object"
In addition, each of them has its own get_request, which it implements itself.
Now let's call the entry-point method
>>> tcp_server.handle_request_noblock()
BaseServer:handle_request_noblock
TCPServer:get_request
>>> udp_server.handle_request_noblock()
BaseServer:handle_request_noblock
UDPServer:get_request
>>>
As you can see, BaseServer:handle_request_noblock was able to call TCPServer:get_request. Why? Remember that the object we used is a TCPServer instance, and as the dir() output above showed, both handle_request_noblock and get_request are available to it, so both can be called on it. The same goes for UDPServer.
In Python, the technicalities of this dispatch from base class to derived class are handled through the MRO.
Python supports classes inheriting from other classes. The class being inherited from is called the parent or superclass, while the class that inherits is called the child or subclass. In Python, the method resolution order defines the order in which classes are searched when looking up a method: the method or attribute is searched for in the class itself first, and then in its bases in the order specified when inheriting.
>>> print(TCPServer.mro())
[<class '__main__.TCPServer'>, <class '__main__.BaseServer'>, <class 'object'>]
>>> print(UDPServer.mro())
[<class '__main__.UDPServer'>, <class '__main__.BaseServer'>, <class 'object'>]
>>>
So for TCPServer:
When we called handle_request_noblock, Python first checked whether an implementation is present in class TCPServer. Since there isn't one, it moved on to the next class, BaseServer, found it there, and used that implementation. If you override the method within TCPServer, the BaseServer implementation would no longer be used.
When we called get_request, Python first checked class TCPServer, found an implementation there, and used it; there was no need to look in BaseServer.
The same idea applies to UDPServer.
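To illustrate the overriding point, here is a small sketch that reuses the toy BaseServer defined above (not the real socketserver classes): if TCPServer defines its own handle_request_noblock, the MRO finds that version first and BaseServer's implementation is never consulted.
>>> class TCPServer(BaseServer):
...     def handle_request_noblock(self):   # overrides the base class method
...         print("TCPServer:handle_request_noblock")
...         self.get_request()
...     def get_request(self):
...         print("TCPServer:get_request")
...
>>> TCPServer().handle_request_noblock()
TCPServer:handle_request_noblock
TCPServer:get_request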
Related
A method defined on a metaclass is accessible by classes that use the metaclass. However, the method will not be accessible on the instances of these classes.
My first guess was that metaclass methods would not be accessible on either classes or instances.
My second guess was that metaclass methods would be accessible on both classes and instances.
I find it surprising that metaclass methods are instead accessible on classes, but not on instances.
What is the purpose of this behavior? Is there any case where I can use this to an advantage? If there is no intended purpose, how does the implementation work such that this is the resulting behavior?
class Meta(type):
    def __new__(mcs, name, bases, dct):
        mcs.handle_foo(dct)
        return type.__new__(mcs, name, bases, dct)

    @classmethod
    def handle_foo(mcs, dct):
        """
        The sole purpose of this method is to encapsulate some logic
        instead of writing code directly in __new__,
        and also so that subclasses of the metaclass can override it.
        """
        dct['foo'] = 1


class Meta2(Meta):
    @classmethod
    def handle_foo(mcs, dct):
        """Example of a metaclass subclass overriding the method."""
        dct['foo'] = 10000


class A(metaclass=Meta):
    pass


class B(metaclass=Meta2):
    pass


assert A.foo == 1
assert B.foo == 10000

assert hasattr(A, 'handle_foo')
assert hasattr(B, 'handle_foo')
# What is the purpose or reason of this method being accessible on A and B?
# If there is no purpose, what about the implementation explains why it is accessible here?

instance = A()
assert not hasattr(instance, 'handle_foo')
# Why is this method not accessible on the instance, when it is on the class?
# What is the purpose or reason for this method not being accessible on the instance?
What is the purpose of this behavior? What use case is this behavior intended to support? I am interested in a direct quote from the documentation, if one exists.
If there is no purpose, and this is simply a byproduct of the implementation, why does the implementation result in this behavior? I.e., how are metaclasses implemented such that the methods defined on the metaclass are also accessible on classes that use the metaclass, but not on the instantiated objects of these classes?
The only practical implication of this that I have found is the following: PyCharm will include these metaclass methods in the code-completion box when you start typing A. (i.e., on the class). I don't want users of my framework to see this. One way to mitigate it is by renaming these methods as private methods (e.g. _handle_foo), but I would still rather these methods not show up in code completion at all. Using a dunder naming convention (__) won't work, as subclasses of the metaclass would not be able to override the methods.
(I've edited this post extensively due to the thoughtful feedback from Miyagi and Serge, in order to make it more clear as to why I am defining methods on the metaclass in the first place: simply in order to encapsulate some behavior instead of putting all the code in __new__, and to allow those methods to be overridden by subclasses of the metaclass)
Let us first look at this in a non-meta situation: We define a function inside a class and access it via the instance.
>>> class Foo:
...     def bar(self): ...
...
>>> Foo.bar
<function __main__.Foo.bar(self)>
>>> foo = Foo()
>>> foo.bar
<bound method Foo.bar of <__main__.Foo object at 0x10dc75790>>
Of note is that the two "attributes" are not the same kind: The class' attribute is the very thing we put into it, but the instance's "attribute" is a dynamically created thing.
Likewise, methods defined on a metaclass are not inherited by the classes that use it; they are (dynamically) bound to those classes.
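The REPL lines that follow assume definitions roughly like these (a sketch; the original snippet does not show them): a metaclass Meta with an ordinary method meta_method, and Foo created with metaclass=Meta.
>>> class Meta(type):
...     def meta_method(cls):
...         """An ordinary method defined on the metaclass."""
...
>>> class Foo(metaclass=Meta):
...     def bar(self): ...
...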
>>> Meta.meta_method # direct access to "class" attribute
<function __main__.Meta.meta_method(cls)>
>>> Foo.meta_method # instance access to "class" attribute
<bound method Meta.meta_method of <class '__main__.Foo'>>
This is the exact same mechanism – because a class is "just" a metaclass' instance.
It should be obvious at this point that the attributes defined on the metaclass and dynamically bound to the class are not the same thing, and there is no reason for them to behave the same. Whether lookup of attributes on an instance picks up metaclass-methods from their dynamic form on the class, directly from the metaclass or not at all is a judgement call.
Python's data model defines that default lookup only takes into account the instance and the instance's type. The instance's type's type is explicitly excluded.
Invoking Descriptors
[…]
The default behavior for attribute access is to get, set, or delete the attribute from an object’s dictionary. For instance, a.x has a lookup chain starting with a.__dict__['x'], then type(a).__dict__['x'], and continuing through the base classes of type(a) excluding metaclasses.
There is no rationale given for this approach. However, it is sufficient to replicate common instantiation+inheritance behaviour of other languages. At the same time, it avoids arbitrarily deep lookups and the issue that type is a recursive metaclass.
Notably, since a metaclass is in full control of how a class behaves, it can directly define methods in the class or redefine attribute access to circumvent the default behaviour.
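As a sketch of that last point (hypothetical names, not from the original post), a metaclass can place a function directly into the class namespace in __new__, in which case instances do see it through normal attribute lookup:
class InjectingMeta(type):
    def __new__(mcs, name, bases, dct):
        # Put the method into the class body itself instead of (only)
        # defining it on the metaclass.
        dct['injected'] = lambda self: "visible on instances"
        return super().__new__(mcs, name, bases, dct)

class C(metaclass=InjectingMeta):
    pass

assert hasattr(C, 'injected')
assert hasattr(C(), 'injected')   # instances see it: it lives in C.__dict__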
Like in Java or C#, where we can create an object like
Account obj = new SavingsAccount();
where Account is the parent class and SavingsAccount is the child class.
How do I do the same thing in Python?
Basically, what I'm trying to do is this:
https://repl.it/#MushfiqurRahma1/Polymorphic-Object
Python is dynamically typed: names refer to objects without any notion of type being involved. You just have
>>> class Account: pass
...
>>> class SavingsAccount(Account): pass
...
>>> obj = SavingsAccount()
Each object stores a reference to its own type
>>> type(obj)
<class '__main__.SavingsAccount'>
and each type has a reference to its method resolution order (MRO)
>>> type(obj).__mro__
(<class '__main__.SavingsAccount'>, <class '__main__.Account'>, <class 'object'>)
Instance attributes are not "compartmentalized" according to the class that "defines" them; each attribute simply exists on the instance itself, without reference to any particular class.
Methods exist solely in the classes themselves; when it comes time to call obj.foo(), the MRO is used to determine whose definition is used.
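Putting the two points above together, here is a small sketch (the add_interest method is made up for illustration): the name obj carries no type information, and each attribute access is resolved by walking type(obj)'s MRO.
class Account:
    def deposit(self, amount):            # defined on the parent
        print(f"depositing {amount}")

class SavingsAccount(Account):
    def add_interest(self, rate):         # defined only on the child
        print(f"adding interest at {rate}")

obj = SavingsAccount()          # no type declared for the name 'obj'
obj.deposit(20)                 # found on Account via the MRO
obj.add_interest(0.05)          # found directly on SavingsAccount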
Python uses duck typing, which means you shouldn't really care about the class on the left side of the assignment. Just instantiate the child class and you can already use its parent's "interface" to do things like dynamic method calls.
Python is a loosely typed language. That means you don't have to specify your variable types. You could just do something like:
class Account():
    def deposit(self, amount):
        pass

class SavingsAccount(Account):
    pass

obj = SavingsAccount()
obj.deposit(20)
EDIT: As chepner pointed out: Python is strongly typed, but also dynamically typed: type is associated with an object, not the name referring to an object.
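A quick sketch of that distinction: the same name can be rebound to objects of different types (dynamic typing), while the objects themselves keep their types and are not silently coerced (strong typing).
x = 1          # the name x refers to an int
x = "hello"    # the same name can later refer to a str: dynamic typing

try:
    "1" + 1    # objects are not implicitly coerced: strong typing
except TypeError as exc:
    print(exc)  # e.g. can only concatenate str (not "int") to str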
I am investigating the source code of a package, and noticed that classes are able to call certain methods that aren't defined within the class.
For example:
inst = ClassA()
meth = inst.meth1()
"some stuff printing to console"
However, meth1() is not defined within ClassA. In the ClassA definition, there is an input that references another class:
from package.sub.file import ClassB
class ClassA(ClassB):
    ...normal class stuff...
From another file:
class ClassB:
    ...normal class stuff...

    def meth1(self):
        ...stuff...
My main question is: how is this possible? How does meth1 become a method of ClassA? I am confused as to why passing ClassB as an input transfers all the methods associated with ClassB to ClassA.
This is inheritance, a common concept in object-oriented programming.
When one class (the child) inherits from another (the parent), an instance of the child can be treated the same as an instance of the parent. That means that if the parent defines a method, it is available to instances of the child as well.
As to how Python implements inheritance, buckle up :)
When ClassA is created, it has an attribute called the method-resolution order (MRO) added to it.
>>> ClassA.__mro__
(<class '__main__.ClassA'>, <class '__main__.ClassB'>, <class 'object'>)
This is built using ClassA, its parent classes, and their parent classes, all the way up to object, the ultimate base class; and it is used for all sorts of attribute lookups.
A somewhat abridged account:
When you try to call inst.meth1, Python goes through the following steps (some steps omitted for clarity and brevity):
1. Does meth1 exist on the instance itself? No. Start checking the classes in the MRO.
2. Does ClassA.meth1 exist? No, check the next class.
3. Does ClassB.meth1 exist? Yes.
4. What is ClassB.meth1? It's a function; call it with inst as the first argument.
Thus (without going into the descriptor protocol in detail), inst.meth1() is equivalent to ClassB.meth1(inst).
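A minimal runnable sketch of that equivalence (with toy bodies, since the question only shows stubs):
class ClassB:
    def meth1(self):
        print("some stuff printing to console")

class ClassA(ClassB):
    pass

inst = ClassA()
inst.meth1()          # resolved through the MRO to ClassB.meth1
ClassB.meth1(inst)    # the equivalent explicit call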
I want to use the superclass to call the parent method of a class while using a different class.
class AI():
    ...
    for i in self.initial_computer_group:
        if i.rect.x == current_coords[0] and i.rect.y == current_coords[1]:
            i.move(coords_to_move[0], coords_to_move[1])
i.move() calls a method from an inherited class, when I want the original method from the parent class.
self.initial_computer_group contains a list of objects which are completely unrelated to the AI class.
I know I need to somehow get the class of the current object that i refers to, but then I don't know what to use as the second argument of super(), as I can't use self, since it's unrelated to AI.
So how do I use super() when I'm in a completely different class to what super is meant to call?
Note: I want to call the parent method as it speeds everything up. I only designed the inherited method to ensure the human isn't breaking the rules in this chess game.
EDIT: I found a solution by changing the name of the inherited method to something else, but I was wondering whether there's still a special way to invoke super() to solve the problem
It sounds like you want to call a specific class's method, no matter what the inheritance graph looks like (and in particular, even if that method happens to be overridden twice). In that case, you don't want super. Instead, call the class's method directly. For example, assuming the version you want is in the Foo class:
Foo.move(i, coords_to_move[0], coords_to_move[1])
As it's hard to read code in comments, here's a simple example:
class BaseClass():
    def func(self):
        print("Here in BaseClass.")

class InheritedClass(BaseClass):
    def func(self):
        print("Here in InheritedClass.")

def func(instance):
    super(InheritedClass, instance).func()
In use:
>>> func(InheritedClass())
Here in BaseClass.
But this clearly makes your code less flexible (as the instance argument must be an InheritedClass instance), and should generally be avoided.
Given some inheritance hierarchy:
class Super:  # descends from object
    def func():
        return 'Super calling'

class Base(Super):
    def func():
        return 'Base calling'

class Sub(Base):
    def func():
        return 'Sub calling'
You can get the resolution hierarchy with the __mro__ attribute:
>>> s=Sub()
>>> s.__class__.__mro__
(<class '__main__.Sub'>, <class '__main__.Base'>, <class '__main__.Super'>, <class 'object'>)
Then you can pick among those by index:
>>> s.__class__.__mro__[-2]
<class '__main__.Super'>
>>> s.__class__.__mro__[-2].func()
Super calling
You can get a specific name by matching against the __name__ attribute:
def by_name(inst, tgt):
    for i, c in enumerate(inst.__class__.__mro__):
        if c.__name__ == tgt:
            return i
    return -1
Then if you want to call a method from the parent class of an otherwise unrelated class, just use one of these approaches on an instance of the descendant class that has the method of interest.
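For example, a quick usage sketch combining by_name with the Super/Base/Sub hierarchy above:
>>> s = Sub()
>>> print(s.__class__.__mro__[by_name(s, 'Super')].func())
Super calling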
Of course, the simplest answer is: if you know the class and method you want, just call it directly:
>>> Super.func()
Super calling
>>> Base.func()
Base calling
If you need to go several levels up (or an unknown number of levels up) to find the method, Python will do that for you:
class Super:
    def func():
        return 'Super calling'

class Base(Super):
    pass

class Sub(Base):
    pass
>>> Sub.func()
Super calling
I am using Django 1.3.1 and I have the following piece of models:
class masterData(models.Model):
    uid = models.CharField(max_length=20, primary_key=True)

    class Meta:
        abstract = True

class Type1(masterData):
    pass

class Type2(masterData):
    pass
Now, I am trying to get a list of all child classes of masterData. I have tried:
masterData.__subclasses__()
The very interesting thing that I found about the above is that it works flawlessly in python manage.py shell and does not work at all when running the webserver!
So how do I get a list of models derived from an Abstract Base Class model?
Thanks :)
Metaclass for defining Abstract Base Classes (ABCs).
Use this metaclass to create an ABC. An ABC can be subclassed directly, and then acts as a mix-in class. You can also register unrelated concrete classes (even built-in classes) and unrelated ABCs as 'virtual subclasses' -- these and their descendants will be considered subclasses of the registering ABC by the built-in issubclass() function, but the registering ABC won't show up in their MRO (Method Resolution Order) nor will method implementations defined by the registering ABC be callable (not even via super()).
I haven't used ABCMeta much (I have a bit today, actually). You need to use the issubclass() function, since the ABC won't show up in their MRO.
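A small sketch of that point (the class names here are made up for illustration): a registered virtual subclass passes issubclass(), but the ABC appears neither in its MRO nor in __subclasses__().
import abc

class MyABC(abc.ABC):
    pass

class Unrelated:
    pass

MyABC.register(Unrelated)            # register as a virtual subclass

print(issubclass(Unrelated, MyABC))  # True
print(MyABC in Unrelated.__mro__)    # False: the ABC is not in the MRO
print(MyABC.__subclasses__())        # []: virtual subclasses are not listed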
If you used inheritance, __subclasses__() would work.
>>> class foo(object):
...     pass
...
>>> class bar(foo):
...     pass
...
>>> a = bar()
>>> a.__class__.__mro__
(<class '__main__.bar'>, <class '__main__.foo'>, <type 'object'>)
>>> foo.__subclasses__()
[<class '__main__.bar'>]