I am using imp.find_module and then imp.load_module to load 'example'. Now I want to make a list of just the functions in example.py that are functions of class A, but I can't seem to find a getattr attribute that is unique to classes and would filter out all the other names in dir(example).
for i in dir(example):
    if hasattr(getattr(example, i), <some_attribute>):
        print i
If you are looking for an existing solution, use the built-in inspect module: it has plenty of functions to test for specific types, isclass for your case:
import inspect

class Foo(object):
    pass

if inspect.isclass(Foo):
    print("Yep, it's a class")
However, if you want to get into the depths, there are a few other approaches.
In Python everything is an instance of something, and classes are no exception: they are instances of metaclasses. In Python 2 there are two kinds of classes: old-style (class Foo: pass) and new-style (class Foo(object): pass). Old-style classes are instances of classobj, exposed as types.ClassType, while new-style classes are instances of type, which is itself a callable metaclass. In Python 3 there are only new-style classes, always derived from object (which in turn is an instance of type).
So you can check whether Foo is a class by testing whether it is an instance of the metaclass that produces classes:
class Foo(object):
    pass

if isinstance(Foo, type):
    print("Yep, it's a new-style class")
Or for old-style:
import types

class Foo:
    pass

if isinstance(Foo, types.ClassType):
    print("Yep, it's an old-style class")
You can also take a look at the data model documentation and its list of class-specific magic attributes.
A method defined on a metaclass is accessible by classes that use the metaclass. However, the method will not be accessible on the instances of these classes.
My first guess was that metaclass methods would not be accessible on either classes or instances.
My second guess was that metaclass methods would be accessible on both classes and instances.
I find it surprising that metaclass methods are instead accessible on classes, but not on instances.
What is the purpose of this behavior? Is there any case where I can use this to an advantage? If there is no intended purpose, how does the implementation work such that this is the resulting behavior?
class Meta(type):
    def __new__(mcs, name, bases, dct):
        mcs.handle_foo(dct)
        return type.__new__(mcs, name, bases, dct)

    @classmethod
    def handle_foo(mcs, dct):
        """
        The sole purpose of this method is to encapsulate some logic
        instead of writing code directly in __new__,
        and also so that subclasses of the metaclass can override this.
        """
        dct['foo'] = 1
class Meta2(Meta):
    @classmethod
    def handle_foo(mcs, dct):
        """Example of the metaclass's subclass overriding."""
        dct['foo'] = 10000
class A(metaclass=Meta):
    pass

class B(metaclass=Meta2):
    pass

assert A.foo == 1
assert B.foo == 10000

assert hasattr(A, 'handle_foo')
assert hasattr(B, 'handle_foo')
# What is the purpose or reason of this method being accessible on A and B?
# If there is no purpose, what about the implementation explains why it is accessible here?

instance = A()
assert not hasattr(instance, 'handle_foo')
# Why is this method not accessible on the instance, when it is on the class?
# What is the purpose or reason for this method not being accessible on the instance?
What is the purpose of this behavior? What use case is this behavior intended to support? I am interested in a direct quote from the documentation, if one exists.
If there is no purpose, and this is simply a byproduct of the implementation, why does the implementation result in this behavior? I.e., how are metaclasses implemented such that methods defined on the metaclass are accessible on classes that use the metaclass, but not on the instantiated objects of these classes?
There is only one practical implication of this that I have found: PyCharm will include these metaclass functions in the code-completion box when you start typing A. (i.e., on the class). I don't want users of my framework to see this. One way to mitigate it is by renaming these methods as private methods (e.g. _handle_foo), but I would still rather they not show up in code completion at all. Using a dunder naming convention (__) won't work, as subclasses of the metaclass would then not be able to override the methods.
(I've edited this post extensively due to the thoughtful feedback from Miyagi and Serge, in order to make it more clear as to why I am defining methods on the metaclass in the first place: simply in order to encapsulate some behavior instead of putting all the code in __new__, and to allow those methods to be overridden by subclasses of the metaclass)
Let us first look at this in a non-meta situation: we define a function inside a class and access it via the instance.
>>> class Foo:
... def bar(self): ...
...
>>> Foo.bar
<function __main__.Foo.bar(self)>
>>> foo = Foo()
>>> foo.bar
<bound method Foo.bar of <__main__.Foo object at 0x10dc75790>>
Of note is that the two "attributes" are not the same kind: The class' attribute is the very thing we put into it, but the instance's "attribute" is a dynamically created thing.
Likewise, methods defined on a metaclass are not inherited by the class; they are (dynamically) bound to the metaclass's instances, the classes.
>>> Meta.meta_method # direct access to "class" attribute
<function __main__.Meta.meta_method(cls)>
>>> Foo.meta_method # instance access to "class" attribute
<bound method Meta.meta_method of <class '__main__.Foo'>>
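(For reference, a minimal setup that produces the output above; the names Meta, meta_method and Foo are illustrative.)

>>> class Meta(type):
...     def meta_method(cls): ...
...
>>> class Foo(metaclass=Meta):
...     pass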
This is the exact same mechanism – because a class is "just" a metaclass' instance.
It should be obvious at this point that the attributes defined on the metaclass and dynamically bound to the class are not the same thing, and there is no reason for them to behave the same. Whether lookup of attributes on an instance picks up metaclass-methods from their dynamic form on the class, directly from the metaclass or not at all is a judgement call.
Python's data model defines that default lookup only takes into account the instance and the instance's type. The instance's type's type is explicitly excluded.
Invoking Descriptors
[…]
The default behavior for attribute access is to get, set, or delete the attribute from an object’s dictionary. For instance, a.x has a lookup chain starting with a.__dict__['x'], then type(a).__dict__['x'], and continuing through the base classes of type(a) excluding metaclasses.
There is no rationale given for this approach. However, it is sufficient to replicate common instantiation+inheritance behaviour of other languages. At the same time, it avoids arbitrarily deep lookups and the issue that type is a recursive metaclass.
Notably, since a metaclass is in full control of how a class behaves, it can directly define methods on the class or redefine attribute access to circumvent the default behaviour, as sketched below.
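A minimal sketch of that last option (purely illustrative; Meta, meta_method and Foo are made-up names): the metaclass injects a __getattr__ so that failed instance lookups fall back to the class, where metaclass methods are visible.

class Meta(type):
    def meta_method(cls):
        return "defined on the metaclass"

    def __new__(mcs, name, bases, dct):
        # Retry failed instance lookups on the class itself,
        # which does see attributes bound from the metaclass.
        def __getattr__(self, attr):
            return getattr(type(self), attr)
        dct.setdefault('__getattr__', __getattr__)
        return super().__new__(mcs, name, bases, dct)

class Foo(metaclass=Meta):
    pass

assert Foo().meta_method() == "defined on the metaclass"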
How can I perform the equivalent of __setattr__ on an old-style class?
If what you want to do is set an attribute on an instance of an old-style class, you can use the setattr built-in function; it works for old-style classes as well. Example -
>>> class Foo:
... def __init__(self,blah):
... self.blah=blah
...
>>> foo = Foo("Something")
>>> foo.blah
'Something'
>>> setattr(foo,'blah','somethingElse')
>>> foo.blah
'somethingElse'
You should use the built-in function for instances of any type of class.
Since the original question accepts "...equivalent methods," I'd like to demonstrate the proper means of implementing the special __setattr__() method in old-style classes.
tl;dr
Use self.__dict__[attr_name] = attr_value in the __setattr__(self, attr_name, attr_value) method of an old-style class.
__setattr__() Meets Old-style Class
Interestingly, both Python 2.7 and 3 do call __setattr__() methods defined by old-style classes. Unlike new-style classes, however, old-style classes provide no default __setattr__() method. To no one's surprise, this hideously complicates __setattr__() methods in old-style classes.
In the subclass __setattr__() of a new-style class, the superclass __setattr__() is usually called at the end to set the desired attribute. In the subclass __setattr__() of an old-style class, this usually raises an exception; in most cases, there is no superclass __setattr__(). Instead, the desired key-value pair of the special __dict__ instance variable must be manually set.
Example or It Didn't Happen
Consider a great old-style class resembling the phrase "The Black Goat of the Woods with a Thousand Young" and defining __setattr__() to prefix the passed attribute name by la_:
class ShubNiggurath:
    def __setattr__(self, attr_name, attr_value):
        # Do not ask why. It is not of human purport.
        attr_name = 'la_' + attr_name

        # Make it so. Do not call
        # super(ShubNiggurath, self).__setattr__(attr_name, attr_value), for no
        # such method exists.
        self.__dict__[attr_name] = attr_value
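A quick demonstration (under Python 2, where old-style classes exist):

>>> shub = ShubNiggurath()
>>> shub.young = 1000
>>> shub.la_young
1000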
Asymmetries in the Darkness
Curiously, old-style classes do provide a default __getattr__() method. How Python 2.7 permitted this obscene asymmetry to stand bears no thinking upon – for it is equally hideous and shameful!
But it is.
I want to ask what the with_metaclass() call means in the definition of a class.
E.g.:
class Foo(with_metaclass(Cls1, Cls2)):
Is it a special case where a class inherits from a metaclass?
Is the new class a metaclass, too?
with_metaclass() is a utility class factory function provided by the six library to make it easier to develop code for both Python 2 and 3.
It uses a little sleight of hand (see below) with a temporary metaclass, to attach a metaclass to a regular class in a way that's cross-compatible with both Python 2 and Python 3.
Quoting from the documentation:
Create a new class with base class base and metaclass metaclass. This is designed to be used in class declarations like this:
from six import with_metaclass

class Meta(type):
    pass

class Base(object):
    pass

class MyClass(with_metaclass(Meta, Base)):
    pass
This is needed because the syntax to attach a metaclass changed between Python 2 and 3:
Python 2:
class MyClass(object):
    __metaclass__ = Meta
Python 3:
class MyClass(metaclass=Meta):
    pass
The with_metaclass() function makes use of the fact that metaclasses are a) inherited by subclasses, b) able to generate new classes, and c) in charge of creating the actual subclass object when you subclass from a base class with a metaclass. It effectively creates a new, temporary base class with a temporary metaclass metaclass that, when used to create the subclass, swaps out the temporary base class and metaclass combo for the metaclass of your choice:
def with_metaclass(meta, *bases):
    """Create a base class with a metaclass."""
    # This requires a bit of explanation: the basic idea is to make a dummy
    # metaclass for one level of class instantiation that replaces itself with
    # the actual metaclass.
    class metaclass(type):
        def __new__(cls, name, this_bases, d):
            return meta(name, bases, d)

        @classmethod
        def __prepare__(cls, name, this_bases):
            return meta.__prepare__(name, bases)
    return type.__new__(metaclass, 'temporary_class', (), {})
Breaking the above down:
type.__new__(metaclass, 'temporary_class', (), {}) uses the metaclass metaclass to create a new class object named temporary_class that is entirely empty otherwise. type.__new__(metaclass, ...) is used instead of metaclass(...) to avoid using the special metaclass.__new__() implementation that is needed for the sleight of hand in the next step to work.
In Python 3 only, when temporary_class is used as a base class, Python first calls metaclass.__prepare__() (passing in the derived class name and (temporary_class,) as the this_bases argument). The intended metaclass meta is then used to call meta.__prepare__(), ignoring this_bases and passing in the bases argument.
Next, after using the return value of metaclass.__prepare__() as the base namespace for the class attributes (or just a plain dictionary when on Python 2), Python calls metaclass.__new__() to create the actual class. This is again passed (temporary_class,) as the this_bases tuple, but the code above ignores this and uses bases instead, calling meta(name, bases, d) to create the new derived class.
As a result, using with_metaclass() gives you a new class object with no additional base classes:
>>> class FooMeta(type): pass
...
>>> with_metaclass(FooMeta) # returns a temporary_class object
<class '__main__.temporary_class'>
>>> type(with_metaclass(FooMeta)) # which has a custom metaclass
<class '__main__.metaclass'>
>>> class Foo(with_metaclass(FooMeta)): pass
...
>>> Foo.__mro__ # no extra base classes
(<class '__main__.Foo'>, <type 'object'>)
>>> type(Foo) # correct metaclass
<class '__main__.FooMeta'>
UPDATE: the six.with_metaclass() function has since been patched with a decorator variant, i.e. @six.add_metaclass(). This update fixes some MRO issues related to the base objects. The new decorator would be applied as follows:
import six

@six.add_metaclass(Meta)
class MyClass(Base):
    pass
Here are the patch notes and here is a similar, detailed example and explanation for using a decorator alternative.
Why do Python classes inherit object?
I have found that both of the following work:
class Foo():
    def a(self):
        print "hello"

class Foo(object):
    def a(self):
        print "hello"
Should all Python classes extend object? Are there any potential problems with not extending object?
In Python 2, not inheriting from object will create an old-style class, which, amongst other effects, causes type to give different results:
>>> class Foo: pass
...
>>> type(Foo())
<type 'instance'>
vs.
>>> class Bar(object): pass
...
>>> type(Bar())
<class '__main__.Bar'>
Also, the rules for multiple inheritance are different, in ways that I won't even try to summarize here (see the sketch below for a taste). All the good documentation I've seen about MI describes new-style classes.
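As a minimal illustration (Python 2, where both kinds exist): old-style classes resolve attributes depth-first, while new-style classes use the C3 linearization, so a diamond can give different answers:

class A:       x = 'A'
class B(A):    pass
class C(A):    x = 'C'
class D(B, C): pass

print D().x    # 'A': depth-first search reaches A through B before trying C

class A2(object): x = 'A'
class B2(A2):     pass
class C2(A2):     x = 'C'
class D2(B2, C2): pass

print D2().x   # 'C': the C3 MRO visits C2 before the shared base A2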
Finally, old-style classes have disappeared in Python 3, and inheritance from object has become implicit. So always prefer new-style classes, unless you need backward compatibility with old software.
In Python 3, classes extend object implicitly, whether you say so yourself or not.
In Python 2, there's old-style and new-style classes. To signal a class is new-style, you have to inherit explicitly from object. If not, the old-style implementation is used.
You generally want a new-style class. Inherit from object explicitly. Note that this also applies to Python 3 code that aims to be compatible with Python 2.
In Python 3 you can create a class in three different ways, and internally they are all equal (see the examples). It doesn't matter how you create a class: all classes in Python 3 inherit from the special class called object. The class object is the fundamental class in Python and provides a lot of functionality, like double-underscore methods, descriptors, support for super(), the property() built-in, etc.
Example 1.

class MyClass:
    pass

Example 2.

class MyClass():
    pass

Example 3.

class MyClass(object):
    pass
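A quick (illustrative) check that all three spellings produce the same class layout:

>>> class MyClass: pass
...
>>> MyClass.__mro__
(<class '__main__.MyClass'>, <class 'object'>)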
Yes, all Python classes should extend (or rather subclass, this is Python here) object. While normally no serious problems will occur, in some cases (as with multiple inheritance trees) this will be important. This also ensures better compatibility with Python 3.
As other answers have covered, Python 3 inheritance from object is implicit. But they do not state what you should do, or what the convention is.
The Python 3 documentation examples all use the following style, which is the convention, so I suggest you follow it for any future code in Python 3.
class Foo:
    pass
Source: https://docs.python.org/3/tutorial/classes.html#class-objects
Example quote:
Class objects support two kinds of operations: attribute references
and instantiation.
Attribute references use the standard syntax used for all attribute
references in Python: obj.name. Valid attribute names are all the
names that were in the class’s namespace when the class object was
created. So, if the class definition looked like this:
class MyClass:
    """A simple example class"""
    i = 12345

    def f(self):
        return 'hello world'
Another quote:
Generally speaking, instance variables are for data unique to each
instance and class variables are for attributes and methods shared by
all instances of the class:
class Dog:
    kind = 'canine'          # class variable shared by all instances

    def __init__(self, name):
        self.name = name     # instance variable unique to each instance
In Python 3 there isn't a difference, but in Python 2 not extending object gives you an old-style class; you want a new-style class over an old-style class.
I'm not seeing what I expect when I use ABCMeta and abstractmethod.
This works fine in python3:
from abc import ABCMeta, abstractmethod
class Super(metaclass=ABCMeta):
    @abstractmethod
    def method(self):
        pass

a = Super()
TypeError: Can't instantiate abstract class Super ...
And in 2.6:
class Super():
    __metaclass__ = ABCMeta

    @abstractmethod
    def method(self):
        pass

a = Super()
TypeError: Can't instantiate abstract class Super ...
They both also work fine (I get the expected exception) if I derive Super from object, in addition to ABCMeta.
They both "fail" (no exception raised) if I derive Super from list.
I want an abstract base class that is a list, but abstract, and concrete in subclasses.
Am I doing it wrong, or should I not want this in Python?
With Super built as in your working snippets, what you're calling when you do Super() is:
>>> Super.__init__
<slot wrapper '__init__' of 'object' objects>
If Super inherits from list, call it Superlist:
>>> Superlist.__init__
<slot wrapper '__init__' of 'list' objects>
Now, abstract base classes are meant to be usable as mixin classes, to be multiply inherited from (to gain the "Template Method" design pattern features that an ABC may offer) together with a concrete class, without making the resulting descendant abstract. So consider:
>>> class Listsuper(Super, list): pass
...
>>> Listsuper.__init__
<slot wrapper '__init__' of 'list' objects>
See the problem? By the rules of multiple inheritance calling Listsuper() (which is not allowed to fail just because there's a dangling abstract method) runs the same code as calling Superlist() (which you'd like to fail). That code, in practice (list.__init__), does not object to dangling abstract methods -- only object.__init__ does. And fixing that would probably break code that relies on the current behavior.
The suggested workaround is: if you want an abstract base class, all its bases must be abstract. So, instead of having the concrete list among your bases, use collections.MutableSequence as a base, add an __init__ that creates a ._list attribute, and implement MutableSequence's abstract methods by direct delegation to self._list. Not perfect, but not all that painful either.
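A sketch of that workaround (illustrative; on modern Pythons the ABC lives at collections.abc.MutableSequence, and the names AbstractList and _list are made up):

from abc import abstractmethod
from collections.abc import MutableSequence

class AbstractList(MutableSequence):
    def __init__(self):
        self._list = []

    # MutableSequence's abstract methods, delegated to the backing list:
    def __getitem__(self, index):
        return self._list[index]

    def __setitem__(self, index, value):
        self._list[index] = value

    def __delitem__(self, index):
        del self._list[index]

    def __len__(self):
        return len(self._list)

    def insert(self, index, value):
        self._list.insert(index, value)

    @abstractmethod
    def method(self):
        pass

class ConcreteList(AbstractList):
    def method(self):
        return "concrete"

c = ConcreteList()
c.append(42)          # mixin methods inherited from MutableSequence work

try:
    AbstractList()    # still abstract: method() is not implemented
except TypeError as exc:
    print(exc)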
Actually, the issue is with __new__, rather than with __init__. Example:
from abc import ABCMeta, abstractmethod
from collections import OrderedDict
class Foo(metaclass=ABCMeta):
    @abstractmethod
    def foo(self):
        return 42

class Empty:
    def __init__(self):
        pass

class C1(Empty, Foo): pass
class C2(OrderedDict, Foo): pass
C1() fails with a TypeError as expected, while C2() works and C2().foo() returns 42.
>>> C1.__init__
<function Empty.__init__ at 0x7fa9a6c01400>
As you can see, it's not using object.__init__, nor is it even invoking its superclass's (object's) __init__.
You can verify it by calling __new__ yourself:
C2.__new__(C2) works just fine, while you get the usual TypeError with C1.__new__(C1).
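An illustrative check (the exact error text varies across Python versions):

obj = C2.__new__(C2)      # fine: OrderedDict.__new__ performs no
                          # abstract-method check
try:
    C1.__new__(C1)
except TypeError as exc:
    print(exc)            # Can't instantiate abstract class C1 ...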
So, imho it's not as clear cut as
if you want an abstract base class, all its bases must be abstract.
While that's a good suggestion, the converse is not necessarily true: neither OrderedDict nor Empty is abstract, and yet the former's subclass is "concrete", while the latter's is "abstract".
If you're wondering, I used OrderedDict in the example instead of list because the latter is a "built-in" type, and thus you cannot monkeypatch it the way you can OrderedDict:
OrderedDict.bar = lambda self: 42
And I wanted to make it explicit that the issue is not related to that.