Get fully qualified class name of an object in Python

For logging purposes I want to retrieve the fully qualified class name of a Python object. (With fully qualified I mean the class name including the package and module name.)
I know about x.__class__.__name__, but is there a simple method to get the package and module?

With the following program
#!/usr/bin/env python
import foo
def fullname(o):
    klass = o.__class__
    module = klass.__module__
    if module == 'builtins':
        return klass.__qualname__  # avoid outputs like 'builtins.str'
    return module + '.' + klass.__qualname__

bar = foo.Bar()
print(fullname(bar))
and Bar defined as
class Bar(object):
    def __init__(self, v=42):
        self.val = v
the output is
$ ./prog.py
foo.Bar
If you're still stuck on Python 2, you'll have to use __name__ instead of __qualname__, which is less informative for nested classes - a class Bar nested in a class Foo will show up as Bar instead of Foo.Bar:
def fullname(o):
    klass = o.__class__
    module = klass.__module__
    if module == '__builtin__':
        return klass.__name__  # avoid outputs like '__builtin__.str'
    return module + '.' + klass.__name__

The provided answers don't deal with nested classes.
Since Python 3.3 (PEP 3155), you can use __qualname__ of the class instead of the __name__. Otherwise, a class like
class Foo:
    class Bar:  # this one
        pass
will show up as just Bar instead of Foo.Bar.
(You'll still need to attach the __module__ to the qualname separately - __qualname__ is not intended to include module names.)
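A minimal sketch of attaching the module to the qualname (the helper name `fullname` here is just illustrative):

```python
def fullname(klass):
    # join the module name and the qualified name, skipping the
    # module for builtins so str doesn't become 'builtins.str'
    module = klass.__module__
    if module == 'builtins':
        return klass.__qualname__
    return f'{module}.{klass.__qualname__}'

class Foo:
    class Bar:
        pass

print(fullname(Foo.Bar))  # e.g. __main__.Foo.Bar when run as a script
print(fullname(str))      # str
```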

Here's one based on Greg Bacon's excellent answer, but with a couple of extra checks:
__module__ can be None (according to the docs), and also for a type like str it can be __builtin__ (which you might not want appearing in logs or whatever). The following checks for both those possibilities:
def fullname(o):
    module = o.__class__.__module__
    if module is None or module == str.__class__.__module__:
        return o.__class__.__name__
    return module + '.' + o.__class__.__name__
(There might be a better way to check for __builtin__. The above just relies on the fact that str is always available, and its module is always __builtin__.)

For python3.7 I use:
".".join([obj.__module__, obj.__name__])
Getting:
package.subpackage.ClassName
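Note this one-liner assumes `obj` is already a class (or a function); an instance has no `__name__`, so you would take `type(obj)` first. A quick sketch (the helper name `qualified` is illustrative):

```python
from decimal import Decimal

def qualified(obj):
    # assumes obj is a class or function carrying __module__/__name__;
    # for an instance, pass type(instance) instead
    return ".".join([obj.__module__, obj.__name__])

print(qualified(Decimal))             # decimal.Decimal
print(qualified(type(Decimal("1"))))  # decimal.Decimal
```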

Consider using the inspect module, which has functions like getmodule that might be what you are looking for:
>>> import inspect
>>> import xml.etree.ElementTree
>>> et = xml.etree.ElementTree.ElementTree()
>>> inspect.getmodule(et)
<module 'xml.etree.ElementTree' from
'D:\tools\python2.5.2\lib\xml\etree\ElementTree.pyc'>
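getmodule returns a module object, so for a full dotted name you could combine its `__name__` with the class's qualname. A sketch, written for Python 3:

```python
import inspect
import xml.etree.ElementTree

et = xml.etree.ElementTree.ElementTree()
mod = inspect.getmodule(type(et))  # module object the class was defined in
full = f"{mod.__name__}.{type(et).__qualname__}"
print(full)  # xml.etree.ElementTree.ElementTree
```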

Some people (e.g. https://stackoverflow.com/a/16763814/5766934) argue that __qualname__ is better than __name__.
Here is an example that shows the difference:
$ cat dummy.py
class One:
    class Two:
        pass
$ python3.6
>>> import dummy
>>> print(dummy.One)
<class 'dummy.One'>
>>> print(dummy.One.Two)
<class 'dummy.One.Two'>
>>> def full_name_with_name(klass):
...     return f'{klass.__module__}.{klass.__name__}'
>>> def full_name_with_qualname(klass):
...     return f'{klass.__module__}.{klass.__qualname__}'
>>> print(full_name_with_name(dummy.One)) # Correct
dummy.One
>>> print(full_name_with_name(dummy.One.Two)) # Wrong
dummy.Two
>>> print(full_name_with_qualname(dummy.One)) # Correct
dummy.One
>>> print(full_name_with_qualname(dummy.One.Two)) # Correct
dummy.One.Two
Note, it also works correctly for builtins:
>>> print(full_name_with_qualname(print))
builtins.print
>>> import builtins
>>> builtins.print
<built-in function print>

__module__ would do the trick.
Try:
>>> import re
>>> print re.compile.__module__
re
This site suggests that __package__ might work for Python 3.0; however, the examples given there won't work under my Python 2.5.2 console.

This is a hack but I'm supporting 2.6 and just need something simple:
>>> from logging.handlers import MemoryHandler as MH
>>> str(MH).split("'")[1]
'logging.handlers.MemoryHandler'

Since this topic is about getting fully qualified names, here is a pitfall that occurs when using relative imports while the main module lives in the same package. E.g., with the below module setup:
$ cat /tmp/fqname/foo/__init__.py
$ cat /tmp/fqname/foo/bar.py
from baz import Baz
print Baz.__module__
$ cat /tmp/fqname/foo/baz.py
class Baz: pass
$ cat /tmp/fqname/main.py
import foo.bar
from foo.baz import Baz
print Baz.__module__
$ cat /tmp/fqname/foo/hum.py
import bar
import foo.bar
Here is the output showing the result of importing the same module differently:
$ export PYTHONPATH=/tmp/fqname
$ python /tmp/fqname/main.py
foo.baz
foo.baz
$ python /tmp/fqname/foo/bar.py
baz
$ python /tmp/fqname/foo/hum.py
baz
foo.baz
When hum imports bar using a relative path, bar sees Baz.__module__ as just "baz", but in the second import, which uses the full name, bar sees the same class as "foo.baz".
If you are persisting the fully-qualified names somewhere, it is better to avoid relative imports for those classes.
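One way to enforce that, sketched below under assumptions of my own (the helper name and the ValueError policy are illustrative), is to refuse to persist a name whose module won't round-trip on import:

```python
from decimal import Decimal

def persistable_fullname(obj):
    # refuse names that won't round-trip on import: classes defined
    # in __main__ (a script) get a different module name when imported
    klass = type(obj)
    module = klass.__module__
    if module == '__main__':
        raise ValueError(f"{klass.__qualname__} lives in __main__; "
                         "its qualified name is not stable")
    return f"{module}.{klass.__qualname__}"

print(persistable_fullname(Decimal("1")))  # decimal.Decimal
```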

Below is an improvement of Greg Bacon's answer, tested for classes, instances, methods, and functions, both builtin and user-defined.
def fullname(o):
    try:
        # if o is a class or function, get module directly
        module = o.__module__
    except AttributeError:
        # then get module from o's class
        module = o.__class__.__module__
    try:
        # if o is a class or function, get name directly
        name = o.__qualname__
    except AttributeError:
        # then get o's class name
        name = o.__class__.__qualname__
    # if o is a method of a builtin class, then module will be None
    if module == 'builtins' or module is None:
        return name
    return module + '.' + name

This is an adaption of the answers by Greg Bacon and MB to use the qualified class name. Note that the question did ask for the qualified class name. It was tested with Python 3.8.
def fullname(obj: object) -> str:
    """Return the full name of the given object using its module and qualified class names."""
    # Ref: https://stackoverflow.com/a/66508248/
    module_name, class_name = obj.__class__.__module__, obj.__class__.__qualname__
    if module_name in (None, str.__class__.__module__):
        return class_name
    return module_name + "." + class_name

None of the answers here worked for me. In my case, I was using Python 2.7 and knew that I would only be working with newstyle object classes.
def get_qualified_python_name_from_class(model):
    c = model.__class__.__mro__[0]
    name = c.__module__ + "." + c.__name__
    return name
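Note that __mro__[0] is always the class itself, so this is equivalent to using model.__class__ directly; a quick check (the Example class is illustrative):

```python
class Example(object):
    pass

e = Example()
# the first entry of the MRO is the class itself
assert e.__class__.__mro__[0] is e.__class__
print(e.__class__.__mro__)
```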

My solution is:
def fullname(obj) -> str:
    if type(obj).__qualname__ != "type":
        # obj is an instance
        return ".".join(
            [
                obj.__class__.__module__,
                obj.__class__.__qualname__,
            ]
        )
    # obj is a class
    return ".".join([obj.__module__, obj.__qualname__])

# class
>>> print(fullname(datetime))
datetime.datetime
# instance
>>> print(fullname(datetime.now()))
datetime.datetime
# instance
>>> print(fullname(3))
builtins.int

Related

Is it possible to test a function that uses get_type_hints with a doctest?

I have a function that uses typing.get_type_hints. I want to add a documentation test to it. However, it looks like get_type_hints fails to resolve types that are defined in a doctest.
Here is a simplified example:
import typing
def f(clazz):
    """
    >>> class MyClass:
    ...     my_field: 'MyClass'
    >>> f(MyClass)
    """
    typing.get_type_hints(clazz)
When running it with python3 -m doctest test.py it throws NameError: name 'MyClass' is not defined.
In order to get it to work in doctest, you would need to provide the correct evaluation scope.
Try this:
import typing
def f(clazz, globalns=None, localns=None):
    """
    >>> class MyClass:
    ...     my_field: 'MyClass'
    >>> f(MyClass, globals(), locals())
    """
    typing.get_type_hints(clazz, globalns, localns)
In doctest, a special set of values is used in the eval scope that get_type_hints relies on.
It is looking for "test.MyClass", which doesn't actually exist otherwise.
from __future__ import annotations
import typing
def f(clazz):
    """
    >>> test = 1
    >>> class MyClass:
    ...     my_field: 'MyClass'
    >>> f(MyClass)
    """
    typing.get_type_hints(clazz)
Add from __future__ import annotations at the beginning of the file; it works for me on Python 3.7.

In Python, how do I get the list of classes defined within a particular file?

If a file myfile.py contains:
class A(object):
    # Some implementation

class B(object):
    # Some implementation
How can I define a method so that, given myfile.py, it returns
[A, B]?
Here, the returned values for A and B can be either the name of the classes or the type of the classes.
(i.e. type(A) = type(str) or type(A) = type(type))
You can get both:
import importlib, inspect
for name, cls in inspect.getmembers(importlib.import_module("myfile"), inspect.isclass):
you may additionally want to check:
if cls.__module__ == 'myfile'
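Putting both pieces together, a sketch (the helper name is illustrative, and the stdlib fractions module stands in for myfile):

```python
import importlib
import inspect

def classes_defined_in(module_name):
    # keep only classes whose __module__ matches, filtering out
    # classes the module merely imported
    module = importlib.import_module(module_name)
    return [cls for _, cls in inspect.getmembers(module, inspect.isclass)
            if cls.__module__ == module_name]

print([c.__name__ for c in classes_defined_in("fractions")])  # includes 'Fraction'
```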
In case it helps someone else. Here is the final solution that I used. This method returns all classes defined in a particular package.
I keep all of the subclasses of X in a particular folder (package) and then, using this method, I can load all the subclasses of X, even if they haven't been imported yet. (If they haven't been imported yet, they cannot be accessible via __all__; otherwise things would have been much easier).
import importlib, os, inspect
def get_modules_in_package(package_name: str):
    files = os.listdir(package_name)
    for file in files:
        if file not in ['__init__.py', '__pycache__']:
            if file[-3:] != '.py':
                continue
            file_name = file[:-3]
            module_name = package_name + '.' + file_name
            for name, cls in inspect.getmembers(importlib.import_module(module_name), inspect.isclass):
                if cls.__module__ == module_name:
                    yield cls
It's a bit long-winded, but you first need to load the file as a module, then inspect its attributes to see which are classes:
import inspect
import importlib.util
# Load the module from file
spec = importlib.util.spec_from_file_location("foo", "foo.py")
foo = importlib.util.module_from_spec(spec)
spec.loader.exec_module(foo)
# Return a list of all attributes of foo which are classes
[x for x in dir(foo) if inspect.isclass(getattr(foo, x))]
Just building on the answers above.
If you need a list of the classes defined within the module (file), i.e. not just those present in the module namespace, and you want the list from within that module itself (i.e. using reflection), then the below will work under both the __name__ == '__main__' and __name__ == '<module>' cases.
import sys, inspect
# You can pass a lambda function as the predicate for getmembers()
[(name, cls) for name, cls in inspect.getmembers(sys.modules[__name__], lambda x: inspect.isclass(x) and (x.__module__ == __name__))]
In my very specific use case of registering classes to a calling framework, I used as follows:
def register():
    myLogger.info(f'Registering classes defined in module {__name__}')
    for name, cls in inspect.getmembers(sys.modules[__name__], lambda x: inspect.isclass(x) and (x.__module__ == __name__)):
        myLogger.debug(f'Registering class {cls} with name {name}')
        <framework>.register_class(cls)

Status of a Python class defined within a function

def myfn():
    class MyClass:
        pass
    return MyClass()

a = myfn()
b = myfn()
print(type(a) is type(b))
Here we can see that type(a) is not type(b). Is this always guaranteed to be the case? Why doesn't the interpreter optimise this since the definition of MyClass doesn't depend on any parameters passed to myfn?
The class statement, when executed, always creates a new class object. Classes are not singletons; putting the class statement in a function just lets you execute it more than once.
Class statements at the module level are executed just once because modules are executed just once, on first import.
You could bypass this by deleting the module object from the sys.modules structure; you'll note that the Foo class imported the third time is a different object after we removed the module:
>>> with open('demomodule.py', 'w') as demomodule:
... demomodule.write('class Foo: pass\n')
...
16
>>> import sys
>>> from demomodule import Foo # first import
>>> id(Foo)
140579578254536
>>> import demomodule # just another reference, module is not run again
>>> id(demomodule.Foo)
140579578254536
>>> del sys.modules['demomodule'] # removing the module object
>>> import demomodule # this causes it to be imported again
>>> id(demomodule.Foo)
140579574812488
The same can happen when you run a module as script then import the same module with import; scripts are run as the __main__ module, using import to import the script again then also creates a separate module object for the imported name:
$ echo 'class Foo: pass
> import demomodule
> print(__name__, id(Foo), id(demomodule.Foo))
> ' > demomodule.py
$ python demomodule.py
demomodule 140718182184264 140718182184264
__main__ 140718182074440 140718182184264
Python is highly dynamic in nature; applying optimisations such as caching a class object produced by a function is fraught with problems. Your function might not take any parameters, but it is not operating in a vacuum. For example, I could replace the __build_class__ hook function and insert an extra class into the bases of any class created anywhere in Python:
>>> def foo_class():
...     class Foo: pass
...     return Foo
...
>>> foo_class().__mro__
(<class '__main__.foo_class.<locals>.Foo'>, <class 'object'>)
>>> import builtins
>>> class Bar: pass
>>> orig_buildclass = builtins.__build_class__
>>> def my_buildclass(f, name, *bases, **kwargs):
...     return orig_buildclass(f, name, *((Bar,) + bases), **kwargs)
...
>>> builtins.__build_class__ = my_buildclass
>>> foo_class().__mro__
(<class '__main__.foo_class.<locals>.Foo'>, <class '__main__.Bar'>, <class 'object'>)
Python is full of hooks like these.

Python, Inject code into module globals

I'm trying to bypass importing from a module, so in my __init__.py I can inject code like this:
globals().update(
    {
        "foo": lambda: print("Hello stackoverflow!")
    }
)
so if I do import mymodule I will be able to call mymodule.foo. That is a simple concept, but useless on its own, because you could just define foo directly.
So the idea is to modify the module's globals dictionary so that when it doesn't find the function foo, it goes elsewhere and I can inject the code. For that I tried:
from importer import load  # a load function to search for the code
from functools import wraps

def global_get_wrapper(f):
    @wraps(f)
    def wrapper(*args):
        module_name, default = args
        res = f(*args)
        if res is None:
            return load(module_name)
        return res
    return wrapper

globals().get = global_get_wrapper(globals().get)  # trying to substitute the get method
But it gives me an error:
AttributeError: 'dict' object attribute 'get' is read-only
The other idea I had is to preload the available function, class, etc names into the module dictionary and lazily load them later.
I run out of ideas to accomplish this and I don't know if this is even possible.
Should I go for writing my own python importer? or is there any other possibility I could not think about?
Thanks in advance.
Instead of hacking globals() it would be better to define __getattr__ for your module as follows:
module_name.py
foo = 'foo'

def bar():
    return 'bar'
my_module.py
import sys
import module_name

class MyModule(object):
    def foobar(self):
        return 'foobar'

    def __getattr__(self, item):
        return getattr(module_name, item)

sys.modules[__name__] = MyModule()
and then:
>>> import my_module
>>> my_module.foo
'foo'
>>> my_module.bar()
'bar'
>>> my_module.foobar()
'foobar'
PEP 562, which targets Python 3.7, introduces __getattr__ for modules. In the rationale it also describes workarounds for previous Python versions.
It is sometimes convenient to customize or otherwise have control over access to module attributes. A typical example is managing deprecation warnings. Typical workarounds are assigning __class__ of a module object to a custom subclass of types.ModuleType or replacing the sys.modules item with a custom wrapper instance. It would be convenient to simplify this procedure by recognizing __getattr__ defined directly in a module that would act like a normal __getattr__ method, except that it will be defined on module instances.
So your mymodule can look like:
foo = 'bar'
def __getattr__(name):
    print('load you custom module and return it')
Here's how it behaves:
>>> import mymodule
>>> mymodule.foo
'bar'
>>> mymodule.baz
load you custom module and return it
I don't quite understand. Would this work for you?
try:
    mymodule.foo()
except:
    print("whatever you wanted to do")

How to change docstring of TestCase Class in Python?

In Python 2.5 (Jython actually), for the unittest TestCase class, there is no setUpClass method, and __init__ is not really acceptable (no reference to self).
When I try to change docstring inside the TestCase:
import os
fileName = os.path.split(__file__)[1]
testCaseName = os.path.splitext(fileName)[0]
setattr(__name__, '__doc__', testCaseName)
I'm getting:
setattr(__name__, '__doc__', testCaseName)
TypeError: readonly attribute
I tried to change the docstring by instantiating it as an object (where self.__doc__ is writable).
UPDATED: but I want to avoid additional coding in the sub-class (i.e. inheriting a super-class function to set the docstring of the sub-class), for example:
File DynamicTestCase.py includes:
class DynamicTestCase(unittest.TestCase):
    def setDocstring(self, testCaseDocstring=None):
        if not testCaseDocstring:
            fileName = os.path.split(__file__)[1]
            testCaseDocstring = os.path.splitext(fileName)[0]
        setattr(self, '__doc__', testCaseDocstring)
File MyTestCase.py includes:
class MyTestCase(DynamicTestCase):
    def test_print_docstring(self):
        self.setDocstring()
        print 'MyTestCase Docstring = ', self.__doc__
But still, the unittest run result is:
MyTestCase Docstring = DynamicTestCase
When I expected MyTestCase Docstring = MyTestCase
Updated - __file__ is the path name from which the current module was loaded, so naturally using __file__ inside DynamicTestCase.py will result in the path DynamicTestCase.py. However, you can just pass the path into setDocstring() from subclasses like this:
DynamicTestCase.py:
class DynamicTestCase(unittest.TestCase):
    def setDocstring(self, docstring=None):
        if docstring is None:
            docstring = __file__
        if os.path.exists(docstring):
            name = os.path.split(docstring)[1]
            docstring = os.path.splitext(name)[0]
        setattr(self, '__doc__', docstring)
MyTestCase.py:
class MyTestCase(DynamicTestCase):
    def __init__(self, *args, **kwargs):
        DynamicTestCase.__init__(self, *args, **kwargs)
        self.setDocstring(__file__)

    def test_print_docstring(self):
        print 'MyTestCase Docstring = ', self.__doc__

    def test_new_docstring(self):
        self.setDocstring('hello')
        print 'MyTestCase Docstring = ', self.__doc__
Output:
MyTestCase Docstring = MyTestCase
MyTestCase Docstring = hello
Rest of answer
In your original code above __name__ is a string, not a class. Jython rightly rejects altering the __doc__ attribute on the str type.
Could you explain a bit about why you want to change TestCase's docstring? For example, you could subclass TestCase and give your own docstring:
class MyTestCase(unittest.TestCase):
    "Docstring of MyTestCase"
Not sure if you've tried it yet, but the unittest2 package's TestCase has setUpClass, tearDownClass class methods. It's a backport of Python 2.7's improvements to work with Python 2.6 and prior.
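For reference, a minimal sketch of what setUpClass buys you (the class and attribute names are illustrative): it runs once per class, unlike setUp, which runs before every test method:

```python
import unittest

class WidgetTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # runs once for the whole class, before any test method
        cls.shared_resource = ["expensive", "setup"]

    def test_resource_available(self):
        self.assertEqual(self.shared_resource, ["expensive", "setup"])

# run it programmatically
result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(WidgetTests).run(result)
print(result.wasSuccessful())
```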
Jython allows you to set the __doc__ of new-style classes, but CPython does not. For that reason you might want to find another way to accomplish your goal if you want your code to be portable:
Jython 2.2.1 on java1.6.0_24
>>> unittest.TestCase.__doc__ = 'foo bar'
>>> unittest.TestCase.__doc__
'foo bar'
Python 2.6.6 (r266:84292, Feb 12 2011, 01:07:21)
>>> unittest.TestCase.__doc__ = 'foo bar'
AttributeError: attribute '__doc__' of 'type' objects is not writable
You are grabbing the filename of the DynamicTestCase file, not the file that is calling the function. In order to get that you have to go into its stack frame:
import inspect
class DynamicTestCase(unittest.TestCase):
    def setDocstring(self, testCaseDocstring=None):
        if not testCaseDocstring:
            fileName = 'unknown.py'
            # Go up one stack frame and grab the file name
            stack = inspect.stack()
            try:
                frame = stack[1][0]
                fileName = frame.f_code.co_filename
            finally:
                del stack
            testCaseDocstring = os.path.splitext(fileName)[0]
        setattr(self, '__doc__', testCaseDocstring)
