Is there a way to obtain the path of the file in which a function object passed to a decorator is defined? Ultimately I need the directory of that file.
def mydec(arg):
    def dec_inner(func):
        def wrapper(*args, **kwargs):
            # how to detect the path where func is defined?
            return func(*args, **kwargs)
        return wrapper
    return dec_inner
You can find the name of the module a function comes from using the __module__ attribute:
>>> from random import choice
>>> choice.__module__
'random'
You can get the module from its name via the sys.modules dictionary:
>>> import sys
>>> sys.modules['random']
<module 'random' from 'C:\Python27\lib\random.pyc'>
And the file path itself comes from the module's __file__ attribute. Putting it all together:
>>> sys.modules[choice.__module__].__file__
'C:\\Python27\\lib\\random.pyc'
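Folded back into the decorator from the question, a minimal sketch might look like this (os.path.dirname gives the directory that was ultimately asked for; inspect.getfile(func) would be another way to obtain the same path):
import os
import sys

def mydec(arg):
    def dec_inner(func):
        def wrapper(*args, **kwargs):
            # Module that defines func, then the directory of its source file
            module = sys.modules[func.__module__]
            func_dir = os.path.dirname(os.path.abspath(module.__file__))
            print(func_dir)  # or use func_dir however the decorator needs
            return func(*args, **kwargs)
        return wrapper
    return dec_inner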
If a file myfile.py contains:
class A(object):
    # Some implementation
    ...

class B(object):
    # Some implementation
    ...
How can I write a method so that, given myfile.py, it returns
[A, B]?
Here, the returned values for A and B can be either the names of the classes or the class objects themselves
(i.e. type(A) == type(str) or type(A) == type(type)).
You can get both (the names and the class objects) using importlib together with inspect:
import importlib, inspect

for name, cls in inspect.getmembers(importlib.import_module("myfile"), inspect.isclass):
    print(name, cls)
To restrict the result to classes actually defined in myfile (rather than merely imported into it), you may additionally want to check:
    if cls.__module__ == 'myfile'
In case it helps someone else, here is the final solution that I used. This method returns all classes defined in a particular package.
I keep all of the subclasses of X in a particular folder (package) and then, using this method, I can load all of them, even if they haven't been imported yet. (If they had already been imported, they would be reachable via __all__ and this would have been much easier.)
import importlib, os, inspect

def get_modules_in_package(package_name: str):
    files = os.listdir(package_name)
    for file in files:
        if file not in ['__init__.py', '__pycache__']:
            if file[-3:] != '.py':
                continue
            file_name = file[:-3]
            module_name = package_name + '.' + file_name
            for name, cls in inspect.getmembers(importlib.import_module(module_name), inspect.isclass):
                if cls.__module__ == module_name:
                    yield cls
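A hypothetical usage, assuming the subclasses live in a package folder named plugins (the name is only illustrative):
# 'plugins' is a placeholder for the actual package (folder) holding the modules
for cls in get_modules_in_package('plugins'):
    print(cls.__name__)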
It's a bit long-winded, but you first need to load the file as a module, then inspect its members to see which of them are classes:
import inspect
import importlib.util
# Load the module from file
spec = importlib.util.spec_from_file_location("foo", "foo.py")
foo = importlib.util.module_from_spec(spec)
spec.loader.exec_module(foo)
# Return a list of all attributes of foo which are classes
[x for x in dir(foo) if inspect.isclass(getattr(foo, x))]
Just building on the answers above.
If you need a list of only the classes defined within the module (file) itself, rather than everything present in the module's namespace, and you want to obtain that list from within the module (i.e. using reflection), then the snippet below works both when __name__ == '__main__' and when __name__ is the module's own name.
import sys, inspect
# You can pass a lambda function as the predicate for getmembers()
[(name, cls) for name, cls in inspect.getmembers(sys.modules[__name__], lambda x: inspect.isclass(x) and x.__module__ == __name__)]
In my very specific use case of registering classes with a calling framework, I used it as follows:
def register():
    myLogger.info(f'Registering classes defined in module {__name__}')
    for name, cls in inspect.getmembers(sys.modules[__name__], lambda x: inspect.isclass(x) and (x.__module__ == __name__)):
        myLogger.debug(f'Registering class {cls} with name {name}')
        <framework>.register_class(cls)
I'm trying to bypass importing from a module, so in my __init__.py I can inject code like this:
globals().update(
    {
        "foo": lambda: print("Hello stackoverflow!")
    }
)
so that if I do import mymodule I can call mymodule.foo. That simple version is useless for my purpose, of course, because I could just define foo directly.
So the idea is to modify the module's globals dictionary so that, when it doesn't find the function foo, it goes elsewhere to locate the code and inject it. For that I tried:
from importer import load  # a load function to search for the code
from functools import wraps

def global_get_wrapper(f):
    @wraps(f)
    def wrapper(*args):
        module_name, default = args
        res = f(*args)
        if res is None:
            return load(module_name)
        return res
    return wrapper

globals().get = global_get_wrapper(globals().get)  # trying to substitute the get method
But it gives me an error:
AttributeError: 'dict' object attribute 'get' is read-only
The other idea I had was to preload the names of the available functions, classes, etc. into the module dictionary and then load them lazily later.
I have run out of ideas for accomplishing this, and I don't know whether it is even possible.
Should I write my own Python importer, or is there another possibility I haven't thought of?
Thanks in advance.
Instead of hacking globals() it would be better to define __getattr__ for your module as follows:
module_name.py
foo = 'foo'
def bar():
    return 'bar'
my_module.py
import sys
import module_name
class MyModule(object):
    def foobar(self):
        return 'foobar'

    def __getattr__(self, item):
        return getattr(module_name, item)

sys.modules[__name__] = MyModule()
and then:
>>> import my_module
>>> my_module.foo
'foo'
>>> my_module.bar()
'bar'
>>> my_module.foobar()
'foobar'
PEP 562, implemented in Python 3.7, introduces __getattr__ for modules. In its rationale it also describes workarounds for earlier Python versions.
It is sometimes convenient to customize or otherwise have control over access to module attributes. A typical example is managing deprecation warnings. Typical workarounds are assigning __class__ of a module object to a custom subclass of types.ModuleType or replacing the sys.modules item with a custom wrapper instance. It would be convenient to simplify this procedure by recognizing __getattr__ defined directly in a module that would act like a normal __getattr__ method, except that it will be defined on module instances.
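One of the pre-3.7 workarounds the rationale mentions, assigning the module's __class__ to a types.ModuleType subclass (allowed in CPython 3.5+), looks roughly like this sketch; the fallback body is only illustrative:
# mymodule.py -- rough pre-3.7 sketch, not the PEP 562 mechanism itself
import sys
import types

foo = 'bar'

class _Fallback(types.ModuleType):
    def __getattr__(self, name):
        # Called only for attributes that normal lookup did not find.
        print('load your custom module and return it')

sys.modules[__name__].__class__ = _Fallback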
So your mymodule can look like:
foo = 'bar'
def __getattr__(name):
    print('load your custom module and return it')
Here's how it behaves:
>>> import mymodule
>>> mymodule.foo
'bar'
>>> mymodule.baz
load your custom module and return it
I don't quite understand. Would this work for you?
try:
    mymodule.foo()
except AttributeError:
    print("whatever you wanted to do")
I have a class called my_class defined in my_module, and I need to import this class. I tried to do it like this:
import importlib
result = importlib.import_module('my_module.my_class')
but it says:
ImportError: No module named 'my_module.my_class'; 'my_module' is not a package
So, as far as I can see, it works only for modules and can't handle classes. How can I import a class from a module?
It is expecting my_module to be a package containing a module named my_class. If you need to import a class, or any attribute in general, dynamically, just use getattr after you import the module:
from importlib import import_module

cls = getattr(import_module('my_module'), 'my_class')
Also, yes, it only works with modules. Remember that importlib.import_module is a wrapper around the internal importlib.__import__ function; it doesn't offer the same amount of functionality as the full import statement, which, coupled with from, performs an attribute look-up on the imported module.
import importlib
import logging

logger = logging.getLogger(__name__)

def factory(module_class_string, super_cls: type = None, **kwargs):
    """
    :param module_class_string: full name of the class to create an object of
    :param super_cls: expected super class, used for validation; None to bypass the check
    :param kwargs: parameters to pass to the constructor
    :return: an instance of the requested class
    """
    module_name, class_name = module_class_string.rsplit(".", 1)
    module = importlib.import_module(module_name)
    assert hasattr(module, class_name), "class {} is not in {}".format(class_name, module_name)
    logger.debug('reading class {} from module {}'.format(class_name, module_name))
    cls = getattr(module, class_name)
    if super_cls is not None:
        assert issubclass(cls, super_cls), "class {} should inherit from {}".format(class_name, super_cls.__name__)
    logger.debug('initialising {} with params {}'.format(class_name, kwargs))
    obj = cls(**kwargs)
    return obj
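A hypothetical call, where 'shapes.Circle' stands in for a real 'package.module.ClassName' string and radius is one of that class's constructor parameters:
# 'shapes' and 'Circle' are placeholders for illustration only
obj = factory('shapes.Circle', radius=2.0)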
I want to refer to an object in the namespace of the file that imports the one I am writing.
This is an example:
main.py
from imp import * # main is importing the file I'm writing
...more code...
obj = 1  # main defines obj
f()  # f(), defined in imp, needs to use obj
...more code using obj...
This is the file that defines f():
imp.py
def f():
    return obj  # I want to refer to main's obj here
Error at runtime:
NameError: global name 'obj' is not defined
How can it be done?
Thanks.
Relying on global variables across modules is not really a good idea. You should pass obj as a parameter to the function f(), like this:
f(obj)
Then just declare the parameter in the function:
def f(obj):
    # code to operate on obj
    return obj
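With that change, the two files from the question would look roughly like this (keeping the original file names; the body of f is only illustrative):
# imp.py
def f(obj):
    return obj + 1  # illustrative; do whatever f needs to do with obj

# main.py
from imp import f
obj = 1
result = f(obj)  # main passes its obj in explicitly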
I am trying to load a function in a remote environment using cPickle, but I get the error "'module' object has no attribute ...". What really puzzles me is that the namespace already contains that attribute, yet loading still fails.
Please help.
import inspect
import cPickle as pickle
from run import run
def get_source(func):
    sourcelines = inspect.getsourcelines(func)[0]
    sourcelines[0] = sourcelines[0].lstrip()
    return "".join(sourcelines)

def fun(f):
    return f()

def fun1():
    return 10
funcs = (fun, fun1)
sources = [get_source(func) for func in funcs]
funcs_serialized = pickle.dumps((fun.func_name,sources),0)
args_serialized = pickle.dumps(fun1,0)
# Creating the environment where fun & fun1 do not exist
del globals()['fun']
del globals()['fun1']
r = run()
r.work(funcs_serialized,args_serialized)
Here is run.py
import cPickle as pickle
class run():
    def __init__(self):
        pass

    def work(self, funcs_serialized, args_serialized):
        func, fsources = pickle.loads(funcs_serialized)
        fobjs = [compile(fsource, '<string>', 'exec') for fsource in fsources]
        # After eval, fun and fun1 should be there in globals/locals
        for fobj in fobjs:
            try:
                eval(fobj)
                globals().update(locals())
            except:
                pass
        print "Fun1 in Globals: ", globals()['fun1']
        print "Fun1 in locals: ", locals()['fun1']
        arg = pickle.loads(args_serialized)
The error is
Fun1 in Globals: <function fun1 at 0xb7dae6f4>
Fun1 in locals: <function fun1 at 0xb7dae6f4>
Traceback (most recent call last):
File "fun.py", line 32, in <module>
r.work(funcs_serialized,args_serialized)
File "/home/guest/kathi/python/workspace/run.py", line 23, in work
arg = pickle.loads(args_serialized)
AttributeError: 'module' object has no attribute 'fun1'
I found this link helpful:
http://stefaanlippens.net/python-pickling-and-dealing-with-attributeerror-module-object-has-no-attribute-thing.html
It gives two solutions. The better one is to add, at the top of the loading module (or __main__):
from myclassmodule import MyClass
But I think a better solution should exist.
From http://docs.python.org/library/pickle.html#what-can-be-pickled-and-unpickled:
Note that functions (built-in and user-defined) are pickled by "fully qualified" name reference, not by value. This means that only the function name is pickled, along with the name of the module the function is defined in. Neither the function's code, nor any of its function attributes are pickled. Thus the defining module must be importable in the unpickling environment, and the module must contain the named object, otherwise an exception will be raised.
You deleted the reference to fun1 in the module that defines fun1, thus the error.
The function's module name is saved in the pickle; when you call loads, it looks for fun1 in __main__ (or wherever it was originally defined).
Try adding
from your_first_module import fun, fun1
to run.py.
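For instance, the top of run.py would then look something like this ('your_first_module' is a placeholder for whatever importable module actually defines fun and fun1):
# top of run.py
import cPickle as pickle
from your_first_module import fun, fun1  # lets pickle.loads resolve fun1 by name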