Find a common class - python

Given two Python instances, say a is an instance of class A and b is an instance of class B, is there an easy way to find a common class for both objects? Also, since everything in Python inherits from object, I would like to find the most 'specialized' common class.
I tried the following code, which seems to work:
def common_class(a, b):
    A = a.__class__
    B = b.__class__
    for i in A.mro():
        if i in B.mro():
            return i
But I was wondering if there exists an easier way to do it.
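For example, would collapsing the loop with next() (a sketch equivalent to the code above) count as simpler?
import itertools

def common_class(a, b):
    # First class in a's MRO that also appears in b's MRO;
    # object is always shared, so next() always finds a result.
    return next(c for c in type(a).mro() if c in type(b).mro())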

Related

python - how to abstract the class of a member variable

Suppose that I have a module like below:
class B:
    def ...
    def ...

class A:
    b: B
    def ...
    def ...
I use class B only as a member variable of class A.
When I try to abstract this module for my business logic, what should I do?
1. One big interface, which has abstract methods for both class A and class B.
2. Two interfaces, which have abstract methods for class A and class B individually.
3. All of the above are wrong; some other way.
Both, 1 & 2 are correct approach, but it completely depends on your application.
I think two interfaces, with abstract methods for class A and class B individually, is the right approach when your classes have separate workings and are completely different from each other.
But, as you have shown in your code, class A holds an instance of class B as a member. If you create a single interface for class A, it will also let you reach the methods of class B through that member. So that approach is good too, and it will keep your code shorter.
I hope this helps you make your decision. Let me know if any other clarification is required.
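For illustration, a minimal sketch of option 2 using abc (the interface and method names here are made up):
from abc import ABC, abstractmethod

class BInterface(ABC):
    @abstractmethod
    def do_b_things(self): ...

class AInterface(ABC):
    @abstractmethod
    def do_a_things(self): ...

class B(BInterface):
    def do_b_things(self):
        print("B at work")

class A(AInterface):
    def __init__(self):
        self.b = B()  # B is still used only as a member variable of A

    def do_a_things(self):
        self.b.do_b_things()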

python - how to get a list of inner classes?

I'm writing a small Python application that contains a few nested classes, like the example below:
class SuperBar(object):
    pass

class Foo(object):
    NAME = 'this is foo'

    class Bar(SuperBar):
        MSG = 'this is how Bar handle stuff'

    class AnotherBar(SuperBar):
        MSG = 'this is how Another Bar handle stuff'
I'm using nested classes to create some sort of hierarchy and to provide a clean way to implement features for a parser.
At some point, I want to create a list of the inner classes. I'd like to have the following output:
[<class '__main__.Bar'>, <class '__main__.AnotherBar'>]
The question is: What is the recommended method to get a list of inner classes in a pythonic way?
I managed to get a list of inner class objects with the method below:
import inspect
def inner_classes_list(cls):
    return [cls_attribute for cls_attribute in cls.__dict__.values()
            if inspect.isclass(cls_attribute)
            and issubclass(cls_attribute, SuperBar)]
It works, but I'm not sure if using __dict__ directly is a good thing to do. I'm using it because it contains the actual class instances that I need and seems to be portable across Python 2 and 3.
First: I can't see how nested classes can be of any use to you. Once you have an instance f of Foo, do you realize that f.Bar and f.AnotherBar will be the same objects for all instances? That is, you can't record any attribute specific to f on f.Bar, like f.Bar.speed, or it will collide with the same attribute from another instance, g.Bar.speed.
To overcome this (and really, it is the only thing that makes sense), you need instances of Bar and AnotherBar attached to the instance f. Such instances usually can't be declared in the class body; you have to create them in Foo's __init__ method.
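A small sketch of that collision, and of the __init__ fix (Foo2 is just an illustrative name):
class SuperBar(object):
    pass

class Foo(object):
    class Bar(SuperBar):
        pass

f, g = Foo(), Foo()
assert f.Bar is g.Bar       # the nested class is one shared object
f.Bar.speed = 10
print(g.Bar.speed)          # 10 -- "per-instance" data collides

class Foo2(object):
    def __init__(self):
        self.bar = Foo.Bar()    # fresh object per instance

f2, g2 = Foo2(), Foo2()
f2.bar.speed = 10
print(hasattr(g2.bar, 'speed'))  # False -- no collision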
The only things Bar and AnotherBar can usefully do nested there are: (1) hold a lot of class methods and static methods, in which case they work as namespaces only;
or (2) have a metaclass for SuperBar, or for themselves, implement the descriptor protocol - https://docs.python.org/3/reference/datamodel.html#implementing-descriptors - but then, you'd be much better off having SuperBar itself implement the descriptor protocol (by defining either __get__ or __set__ methods), and attaching to Foo's body instances of these classes, not the classes themselves.
That said, you came up with the solution of using __dict__ to get the inner classes: that won't work if Foo itself inherits from other classes that also have nested classes, because the superclasses of Foo are never searched. You can either have a method that looks at all classes in Foo's __mro__, or simply use dir and issubclass:
class Foo:
    @classmethod
    def inner_classes_list(cls):
        results = []
        for attrname in dir(cls):
            obj = getattr(cls, attrname)
            if isinstance(obj, type) and issubclass(obj, SuperBar):
                results.append(obj)
        return results
(If you want this to work for all classes like Foo that don't share a common base, the same code will work if it is not declared as a classmethod, of course; and SuperBar could also be a parameter to this function, if you have more than one nested-class hierarchy.)
Now that you have this, we urge you to ask other questions saying what you actually want to do - and to read about "descriptors" - and even "properties". Really: there is very little use one can think of for nested subclasses.

How to access globals from a different point in code

I'm trying to find a way of basically doing a late eval, using context from a different location in the code. As an example, I have a class Friend, and it can be used like this:
>>> class A:
...     friend = Friend('B')
...
>>> class B:
...     friend = Friend('A')
...
>>> A.friend.getobject()
<class '__main__.B'>
However, Friend is defined elsewhere in the code, in a separate library, and would look something like this:
class Friend:
    def __init__(self, objname):
        self.objname = objname

    def getobject(self):
        return eval(self.objname, original_context)
The sqlalchemy ORM has a similar pattern for defining columns, but they implement it by tracking all owning classes (i.e. tables) in a session. I could do something similar if I need to, but I'd like to know if there is another way to do this. I've been looking at frames and the interpreter stack, and I think I can get to the relevant frame's locals using something like inspect.stack()[1][0].f_locals, but I would have to do this in Friend.__init__, which is called before the target object is defined.
My question is how to find original_context, but only at the time it is needed. This comes down to two issues:
1. How to access the environment (or frame?) in which Friend was instantiated.
2. How to access it at the time getobject is called.
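For what it's worth, here is a sketch of the frame-capture idea, grabbing the caller's globals instead of its locals so the name can be resolved lazily (this assumes the classes are defined at module level):
import inspect

class Friend:
    def __init__(self, objname):
        self.objname = objname
        # Capture the instantiating frame's global namespace now;
        # the name itself is only looked up later, once the
        # target class actually exists in that namespace.
        self._context = inspect.stack()[1][0].f_globals

    def getobject(self):
        return eval(self.objname, self._context)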
You will have to use the fully qualified class name, i.e.:
class A:
    friend = Friend('themodule.B')
Then take that string, extract the module name, import it, and fetch the B class from the imported module.
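That resolution step could be sketched like this (resolve is a made-up helper name):
import importlib

def resolve(qualname):
    # 'themodule.B' -> import themodule, return its attribute B
    modname, clsname = qualname.rsplit('.', 1)
    return getattr(importlib.import_module(modname), clsname)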
However, in general, a better way is to do:
class A:
    friend = Friend(B)
In your example B isn't defined at that point, but you can easily do:
class A:
    pass

class B:
    pass

A.friend = Friend(B)
B.friend = Friend(A)

How do I dynamically add mixins as base classes without getting MRO errors?

Say I have classes A, B, and C.
Classes A and B are both mixin classes for class C.
class A(object):
    pass

class B(object):
    pass

class C(object, A, B):
    pass
This will not work when instantiating class C. I would have to remove object from class C to make it work (otherwise you get MRO problems):
TypeError: Error when calling the metaclass bases
Cannot create a consistent method resolution
order (MRO) for bases B, object, A
However, my case is a bit more complicated. In my case class C is a server, and A and B are plugins that are loaded on startup; they reside in their own folder.
I also have a class named Cfactory. In Cfactory I have a __new__ method that creates a fully functional C object. In the __new__ method I search for plugins, load them using __import__, and then attach them with C.__bases__ += (loadedClassTypeGoesHere, )
So the following is a possibility (I made it quite abstract):
class A(object):
    def __init__(self): pass
    def printA(self): print "A"

class B(object):
    def __init__(self): pass
    def printB(self): print "B"

class C(object):
    def __init__(self): pass

class Cfactory(object):
    def __new__(cls):
        C.__bases__ += (A,)
        C.__bases__ += (B,)
        return C()
This again will not work, and will give the MRO errors again:
TypeError: Cannot create a consistent method resolution
order (MRO) for bases object, A
An easy fix for this is removing the object base class from A and B. However, this will make them old-style classes, which should be avoided when these plugins are run stand-alone (which should be possible, unit-test-wise).
Another easy fix is removing object from C, but this will also make it an old-style class, and C.__bases__ will then be unavailable, so I can't add extra classes to the bases of C.
What would be a good architectural solution for this, and how would you do something like this? For now I can live with old-style classes for the plugins themselves, but I'd rather not use them.
Think of it this way -- you want the mixins to override some of the behaviors of object, so they need to be before object in the method resolution order.
So you need to change the order of the bases:
class C(A, B, object):
    pass
Due to this bug, you need C not to inherit from object directly to be able to correctly assign to __bases__, and the factory really could just be a function:
class FakeBase(object):
    pass

class C(FakeBase):
    pass

def c_factory():
    for base in (A, B):
        if base not in C.__bases__:
            C.__bases__ = (base,) + C.__bases__
    return C()
I don't know the details, so maybe I'm completely off-base here, but it seems like you're using the wrong mechanisms to achieve your design.
First off, why is Cfactory a class, and why does its __new__ method return an instance of something else? That looks like a bizarre way to implement what is quite naturally a function. Cfactory as you've described it (and shown in a simplified example) doesn't behave at all like a class; you don't have multiple instances of it that share functionality (in fact it looks like you've made it impossible to construct instances of it naturally).
To be honest, C doesn't look very much like a class to me either. It seems like you can't be creating more than one instance of it, otherwise you'd end up with an ever-growing bases list. So that makes C basically a module rather than a class, only with extra boilerplate. I try to avoid the "single-instance class to represent the application or some external system" pattern (though I know it's popular because Java requires that you use it). But the class inheritance mechanism can often be handy for things that aren't really classes, such as your plugin system.
I would've done this with a classmethod on C to find and load plugins, invoked by the module defining C so that it's always in a good state. Alternatively you could use a metaclass to automatically add whatever plugins it finds to the class bases. Mixing the mechanism for configuring the class in with the mechanism for creating an instance of the class seems wrong; it's the opposite of flexible de-coupled design.
If the plugins can't be loaded at the time C is created, then I would go with manually invoking the configurator classmethod at the point when you can search for plugins, before the C instance is created.
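A sketch of that configurator classmethod, reusing FakeBase, A, and B from the earlier answer (load_plugins is a made-up name):
class C(FakeBase):
    @classmethod
    def load_plugins(cls, plugins):
        # Configure the class once, before any instance exists;
        # FakeBase keeps __bases__ assignable, as explained above.
        for plugin in plugins:
            if plugin not in cls.__bases__:
                cls.__bases__ = (plugin,) + cls.__bases__

# At startup, as soon as the plugin search can run:
C.load_plugins([A, B])
server = C()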
Actually, if the class can't be put into a consistent state as soon as it's created I would probably rather go for dynamic class creation than modifying the bases of an existing class. Then the system isn't locked into the class being configured once and instantiated once; you're at least open to the possibility of having multiple instances with different sets of plugins loaded. Something like this:
def Cfactory(*args, **kwargs):
    plugins = find_plugins()
    bases = (C,) + plugins
    cls = type('C_with_plugins', bases, {})
    return cls(*args, **kwargs)
That way, you have your single call to create your C instance, which gives you a correctly configured instance, but it doesn't have strange side effects on any other hypothetical instances of C that might already exist, and its behaviour doesn't depend on whether it's been run before. I know you probably don't need either of those two properties, but it's barely more code than you have in your simplified example, and why break the conceptual model of what classes are if you don't have to?
There is a simple workaround: create a helper class with a nice name, like PluginBase, and inherit from that instead of object.
This makes the code more readable (IMHO) and it circumvents the bug.
class PluginBase(object): pass
class ServerBase(object): pass

class pluginA(PluginBase): "Now it is clearly a plugin class"
class pluginB(PluginBase): "Another plugin"

class Server1(ServerBase, pluginA, pluginB): "This works"

class Server2(ServerBase): pass
Server2.__bases__ += (pluginA,)  # This also works
As a note: you probably don't need the factory; it's needed in C++, but hardly ever in Python.

Is there a way to implement methods like __len__ or __eq__ as classmethods?

It is pretty easy to implement the __len__(self) method in Python so that it handles len(inst) calls like this one:
class A(object):
    def __len__(self):
        return 7

a = A()
len(a)  # gives us 7
And there are plenty of similar methods you can define (__eq__, __str__, __repr__, etc.).
I know that Python classes are objects as well.
My question: can I somehow define, for example, __len__ so that the following works:
len(A) # makes sense and gives some predictable result
What you're looking for is called a "metaclass"... just like a is an instance of class A, A is an instance of a class as well, referred to as its metaclass. By default, Python classes are instances of the type class (the only exception is under Python 2, which has some legacy "old-style" classes: those that don't inherit from object). You can check this by doing type(A)... it should return type itself (yes, that object has been overloaded a little bit).
Metaclasses are powerful and brain-twisting enough to deserve more than the quick explanation I was about to write... a good starting point would be this stackoverflow question: What is a Metaclass.
For your particular question, for Python 3, the following creates a metaclass which aliases len(A) to invoke a class method on A:
class LengthMetaclass(type):
    def __len__(self):
        return self.clslength()

class A(object, metaclass=LengthMetaclass):
    @classmethod
    def clslength(cls):
        return 7

print(len(A))
(Note: the example above is for Python 3. The syntax is slightly different for Python 2: you would set __metaclass__ = LengthMetaclass inside the class body instead of passing it as a keyword parameter.)
The reason LengthMetaclass.__len__ doesn't affect instances of A is that attribute resolution in Python first checks the instance dict, then walks the class hierarchy [A, object], but it never consults the metaclasses. Whereas accessing A.__len__ first consults the instance A, then walks its class hierarchy, which consists of [LengthMetaclass, type].
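To see both halves of that with the definitions above:
a = A()
print(len(A))  # 7 -- len() finds __len__ on type(A), i.e. LengthMetaclass
len(a)         # TypeError: A itself defines no __len__, and the
               # metaclass is never consulted for instances of A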
Since a class is an instance of a metaclass, one way is to use a custom metaclass:
>>> Meta = type('Meta', (type,), {'__repr__': lambda cls: 'class A'})
>>> A = Meta('A', (object,), {'__repr__': lambda self: 'instance of class A'})
>>> A
class A
>>> A()
instance of class A
I fail to see how the syntax specifically is important, but if you really want a simple way to implement it, just use the normal __len__(self) that handles len(inst), and in your implementation make it return a class variable that all instances share:
class A:
    my_length = 5

    def __len__(self):
        return self.my_length
and you can later call it like this:
len(A())  # returns 5
Obviously this creates a temporary instance of your class, but length only makes sense for an instance of a class and not really for the concept of a class (a type object).
Editing the metaclass sounds like a very bad idea, and unless you are doing something for school or just to mess around, I really suggest you rethink this idea.
try this:
class Lengthy:
    x = 5

    @classmethod
    def __len__(cls):
        return cls.x
The @classmethod allows you to call it directly on the class, but your len implementation won't be able to depend on any instance variables:
a = Lengthy()
len(a)
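For completeness, a quick check of what does and doesn't work on the class itself (using the Lengthy class above):
Lengthy.__len__()  # 5 -- the classmethod can be invoked explicitly
len(Lengthy)       # still raises TypeError: len() looks __len__ up
                   # on type(Lengthy) (the metaclass), not on Lengthy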
