I'm writing a piece of reusable code to import where I need it, but it needs some info about what is importing it. I have a workaround that does what I want, but it's a bit ugly. Is there a better way?
Here is a simplified version of what I'm doing.
What I want: import a function and use it. But look at f in mod2: it needs some info from the module that imports it.
mod1:
from mod2 import f
f(...)
mod2:
from things_i_want import parent_module, importing_module
def f(*args, **kwargs):
    from importing_module.parent_module import models
    # ... do some stuff with it, including populating v with a string
    v = 'some_string'
    m = getattr(importing_module, v, None)
    if callable(m):
        return m(*args, **kwargs)
My ugly workaround:
mod1:
from mod2 import f as _f
def f(*a, **k):
    return _f(__name__, globals(), *a, **k)
f(...)
mod2:
def f(module_name, globs, *args, **kwargs):
    # find the parent module's path
    parent_module_path = module_name.split('.')[0:-1]
    # find the models module's path
    models_path = parent_module_path + ['models']
    # import it
    models = __import__('.'.join(models_path), {}, {}, [''])
    # ... do some stuff with it, including populating v with a string
    v = 'some_string'
    if v in globs:
        return globs[v](*args, **kwargs)
That's a bad idea, because modules are cached.
If another module, say mod3.py, also imports mod2, it gets the very same mod2 object that was created on the first import; the module is not re-imported.
And maybe you imported some other module that itself imported mod2 before you did, in which case you're not the one importing mod2 anymore. Modules are imported only once.
So instead of trying to figure out who imported the module, use another, reusable approach. Perhaps a class, passing the instance around?
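For example, here is a minimal sketch of that idea (the Dispatcher name and its methods are mine, not from your code): the importing module hands over its own name and namespace explicitly, so mod2 never has to guess who imported it.
mod2:
import importlib

class Dispatcher:
    def __init__(self, module_name, namespace):
        self.module_name = module_name  # e.g. mod1's __name__
        self.namespace = namespace      # e.g. mod1's globals()

    def models(self):
        # import the 'models' module that lives next to the caller
        package = self.module_name.rsplit('.', 1)[0]
        return importlib.import_module(package + '.models')

    def call(self, name, *args, **kwargs):
        m = self.namespace.get(name)
        if callable(m):
            return m(*args, **kwargs)
mod1:
from mod2 import Dispatcher

d = Dispatcher(__name__, globals())
d.call('some_string')
This keeps mod2 reusable: every importer passes its own context instead of mod2 trying to detect it.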
Does the interpreter somehow keep a timestamp of when a module is imported? Or is there an easy way of hooking into the import machinery to do this?
The scenario is a long-running Python process that at various points imports user-provided modules. I would like the process to be able to check "should I restart to load the latest code changes?" by checking the module file's timestamps against the time the module was imported.
Here's a way to automatically have an attribute (named _loadtime in the example code below) added to modules when they're imported. The code is based on Recipe 10.12 titled "Patching Modules on Import" in the book Python Cookbook, by David Beazley and Brian Jones, O'Reilly, 2013, which shows a technique that I adapted to do what you want.
For testing purposes I created this trivial target_module.py file:
print('in target_module')
Here's the example code:
import importlib
import sys
import time

class PostImportFinder:
    def __init__(self):
        self._skip = set()  # To prevent recursion.

    def find_module(self, fullname, path=None):
        if fullname in self._skip:  # Prevent recursion.
            return None
        self._skip.add(fullname)
        return PostImportLoader(self)

class PostImportLoader:
    def __init__(self, finder):
        self._finder = finder

    def load_module(self, fullname):
        importlib.import_module(fullname)
        module = sys.modules[fullname]
        # Add a custom attribute to the module object.
        module._loadtime = time.time()
        self._finder._skip.remove(fullname)
        return module

sys.meta_path.insert(0, PostImportFinder())
if __name__ == '__main__':
    try:
        print('importing target_module')
        import target_module
    except Exception as e:
        print('Import failed:', e)
        raise

    loadtime = time.localtime(target_module._loadtime)
    print('module loadtime: {} ({})'.format(
        target_module._loadtime,
        time.strftime('%Y-%b-%d %H:%M:%S', loadtime)))
Sample output:
importing target_module
in target_module
module loadtime: 1604683023.2491636 (2020-Nov-06 09:17:03)
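Note that find_module/load_module is the legacy loader protocol: it was deprecated in Python 3.4, and the import system no longer falls back to it in Python 3.12. A rough modern equivalent using the find_spec/exec_module protocol might look like this (a sketch; the TimestampFinder and TimestampLoader names are mine):
import importlib.abc
import importlib.util
import sys
import time

class TimestampFinder(importlib.abc.MetaPathFinder):
    def __init__(self):
        self._skip = set()  # Guard against recursive find_spec calls.

    def find_spec(self, fullname, path=None, target=None):
        if fullname in self._skip:
            return None
        self._skip.add(fullname)
        try:
            # Ask the rest of the import machinery for the real spec.
            spec = importlib.util.find_spec(fullname)
        finally:
            self._skip.remove(fullname)
        if spec is None or spec.loader is None:
            return None
        spec.loader = TimestampLoader(spec.loader)
        return spec

class TimestampLoader(importlib.abc.Loader):
    def __init__(self, wrapped):
        self._wrapped = wrapped

    def create_module(self, spec):
        return self._wrapped.create_module(spec)

    def exec_module(self, module):
        self._wrapped.exec_module(module)
        module._loadtime = time.time()  # Stamp the module after it runs.

sys.meta_path.insert(0, TimestampFinder())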
I don't think there's any way to get around how hacky this is, but how about something like this every time you import? (I don't know exactly how you're importing):
import importlib
import time
from types import ModuleType

# create a dictionary to keep track of import times;
# filter globals to exclude things that aren't modules and aren't builtins
MODULE_TIMES = {k: None for k, v in globals().items()
                if not k.startswith("__") and not k.endswith("__")
                and isinstance(v, ModuleType)}

for module_name in user_module_list:  # user_module_list: your list of module names
    MODULE_TIMES[module_name] = time.time()
    # eval() can't execute an import statement; importlib does this properly
    globals()[module_name] = importlib.import_module(module_name)
And then you can reference this dictionary in a similar way later.
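For instance, the "should I restart?" check from the question could then be (a sketch; it assumes the tracked modules expose a __file__ attribute):
import os
import sys

def stale_modules():
    # Names of tracked modules whose source file changed after import.
    stale = []
    for name, imported_at in MODULE_TIMES.items():
        module = sys.modules.get(name)
        module_file = getattr(module, '__file__', None)
        if module_file and imported_at is not None:
            if os.path.getmtime(module_file) > imported_at:
                stale.append(name)
    return stale

if stale_modules():
    print('user code changed on disk; consider restarting')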
I'm trying to bypass importing from a module, so in my __init__.py I can inject code like this:
globals().update(
    {
        "foo": lambda: print("Hello stackoverflow!")
    }
)
so if I do import mymodule I will be able to call mymodule.foo. That is a simple concept, but useless for this purpose, because you could just define foo directly.
So the idea is to modify the module's globals dictionary so that, when it doesn't find the function foo, it looks elsewhere, and I can inject the code. For that I tried:
from importer import load  # a load function to search for the code
from functools import wraps

def global_get_wrapper(f):
    @wraps(f)
    def wrapper(*args):
        module_name, default = args
        res = f(*args)
        if res is None:
            return load(module_name)
        return res
    return wrapper

globals().get = global_get_wrapper(globals().get)  # trying to substitute the get method
But it gives me an error:
AttributeError: 'dict' object attribute 'get' is read-only
The other idea I had is to preload the available function, class, and other names into the module dictionary and lazily load them later.
I've run out of ideas to accomplish this and I don't know if it is even possible.
Should I go for writing my own python importer? or is there any other possibility I could not think about?
Thanks in advance.
Instead of hacking globals() it would be better to define __getattr__ for your module as follows:
module_name.py
foo = 'foo'

def bar():
    return 'bar'
my_module.py
import sys
import module_name

class MyModule(object):
    def foobar(self):
        return 'foobar'

    def __getattr__(self, item):
        return getattr(module_name, item)

sys.modules[__name__] = MyModule()
and then:
>>> import my_module
>>> my_module.foo
'foo'
>>> my_module.bar()
'bar'
>>> my_module.foobar()
'foobar'
PEP 562, implemented in Python 3.7, introduces __getattr__ for modules. In the rationale it also describes workarounds for previous Python versions.
It is sometimes convenient to customize or otherwise have control over access to module attributes. A typical example is managing deprecation warnings. Typical workarounds are assigning __class__ of a module object to a custom subclass of types.ModuleType or replacing the sys.modules item with a custom wrapper instance. It would be convenient to simplify this procedure by recognizing __getattr__ defined directly in a module that would act like a normal __getattr__ method, except that it will be defined on module instances.
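On older Pythons, the __class__ workaround mentioned there looks roughly like this (a sketch; module __class__ assignment works on Python 3.5+):
import sys
import types

class _ModuleWithGetattr(types.ModuleType):
    def __getattr__(self, name):
        # only called for attributes not found through normal lookup
        print('load your custom module and return it')

sys.modules[__name__].__class__ = _ModuleWithGetattr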
So your mymodule can look like:
foo = 'bar'
def __getattr__(name):
    print('load your custom module and return it')
Here's how it behaves:
>>> import mymodule
>>> mymodule.foo
'bar'
>>> mymodule.baz
load your custom module and return it
I don't quite understand. Would this work for you?
try:
    mymodule.foo()
except:
    print("whatever you wanted to do")
My Python package depends on an external library for a few of its functions. This is a non-Python package and can be difficult to install, so I'd like users to still be able to use my package, but have it fail only when they call a function that depends on the non-Python package.
What is the standard practice for this? I could import the non-Python package only inside the methods that use it, but I really hate doing this.
My current setup:
myInterface.py
myPackage/
--classA.py
--classB.py
The interface script myInterface.py imports classA and classB, and classB imports the non-Python package. If the import fails, I print a warning. If myMethod is called and the package isn't installed, there will be some error downstream, but I don't catch it anywhere, nor do I warn the user.
classB is imported every time the interface script is called so I can't have anything fail there, which is why I included the pass. Like I said above, I could import inside the method and have it fail there, but I really like keeping all of my imports in one place.
From classB.py
try:
    import someWeirdPackage
except ImportError:
    print("Cannot import someWeirdPackage")
    pass

class ClassB():
    ...
    def myMethod(self):
        swp = someWeirdPackage()
        ...
If you are only importing one external library, I would go for something along these lines:
try:
    import weirdModule
    available = True
except ImportError:
    available = False

def func_requiring_weirdmodule():
    if not available:
        raise ImportError('weirdModule not available')
    ...
The conditional and error checking is only needed if you want to give more descriptive errors. If not, you can omit it and let Python throw the corresponding error when you try to call something from the module that was never imported, as happens in your current setup.
If multiple functions do use weirdModule, you can wrap the checking into a function:
def require_weird_module():
    if not available:
        raise ImportError('weirdModule not available')

def f1():
    require_weird_module()
    ...

def f2():
    require_weird_module()
    ...
On the other hand, if you have multiple libraries to be imported by different functions, you can load them dynamically. Although it doesn't look pretty, Python caches the imported modules and there is nothing wrong with it. I would use importlib:
import importlib

def func_requiring_weirdmodule():
    weirdModule = importlib.import_module('weirdModule')
Again, if multiple of your functions import complicated external modules you can wrap them into:
def import_external(name):
    return importlib.import_module(name)

def f1():
    weird1 = import_external('weirdModule1')

def f2():
    weird2 = import_external('weirdModule2')
And last, you could create a handler to prevent importing the same module twice, something along the lines of:
class Importer(object):
    __loaded__ = {}

    @staticmethod
    def import_external(name):
        if name in Importer.__loaded__:
            return Importer.__loaded__[name]
        mod = importlib.import_module(name)
        Importer.__loaded__[name] = mod
        return mod

def f1():
    weird = Importer.import_external('weird1')

def f2():
    weird = Importer.import_external('weird1')
Although I'm pretty sure that importlib does its own caching behind the scenes, so you don't really need the manual caching.
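It does: importlib.import_module consults sys.modules first, which is easy to verify:
import importlib
import sys

m1 = importlib.import_module('json')
m2 = importlib.import_module('json')
print(m1 is m2)                   # True: the second call returned the cached module
print(m1 is sys.modules['json'])  # True: the cache is just sys.modules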
In short, although it does look ugly, there is nothing wrong with importing modules dynamically in Python; in fact, a lot of libraries rely on this. On the other hand, if it is just a special case of three methods accessing one external function, use your approach, or my first one if you want to add custom exception handling.
I'm not really sure that there's any best practice in this situation, but I would redefine the function if it's not supported:
def warn_import():
    print("Cannot import someWeirdPackage")

try:
    import someWeirdPackage
    external_func = someWeirdPackage
except ImportError:
    external_func = warn_import

class ClassB():
    def myMethod(self):
        swp = external_func()

b = ClassB()
b.myMethod()
You can create two separate classes for the two cases. The first will be used when the package exists; the second will be used when the package does not exist.
class ClassB1():
    def myMethod(self):
        print("someWeirdPackage exists")
        # do something

class ClassB2(ClassB1):
    def myMethod(self):
        print("someWeirdPackage does not exist")
        # do something or raise an exception

try:
    import someWeirdPackage

    class ClassB(ClassB1):
        pass
except ImportError:
    class ClassB(ClassB2):
        pass
You can also use the approach given below to overcome the problem you're facing.
class UnAvailableName(object):
    def __init__(self, name):
        self.target = name

    def __getattr__(self, attr):
        raise ImportError("{} is not available.".format(attr))

try:
    import someWeirdPackage
except ImportError:
    print("Cannot import someWeirdPackage")
    someWeirdPackage = UnAvailableName("someWeirdPackage")

class ClassB():
    def myMethod(self):
        swp = someWeirdPackage.hello()

a = ClassB()
a.myMethod()
Is there any way to make an implicit initializer for modules (not packages)?
Something like:
#file: mymodule.py
def __init__(val):
    global value
    value = val
And when you import it:
#file: mainmodule.py
import mymodule(5)
The import statement just binds the module object returned by the builtin __import__ function; nothing in the import machinery will call an initializer for you.
Therefore it's not possible to have a module __init__ function that takes arguments.
You'll have to call it yourself:
import mymodule
mymodule.__init__(5)
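Since __init__ has no special meaning at module level, a plainer spelling of the same idea is an ordinary, explicitly called initializer (the name init here is my choice):
#file: mymodule.py
value = None

def init(val):
    global value
    value = val

#file: mainmodule.py
import mymodule
mymodule.init(5)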
These things often are not closed as duplicates, so here's a really nice solution from Pass Variable On Import. TL;DR: use a config module, configure that before importing your module.
[...] A cleaner way to do it which is very useful for multiple configuration
items in your project is to create a separate Configuration module
that is imported by your wrapping code first, and the items set at
runtime, before your functional module imports it. This pattern is
often used in other projects.
myconfig/__init__.py :
PATH_TO_R_SOURCE = '/default/R/source/path'
OTHER_CONFIG_ITEM = 'DEFAULT'
PI = 3.14
mymodule/__init__.py :
import myconfig

PATH_TO_R_SOURCE = myconfig.PATH_TO_R_SOURCE
robjects.r.source(PATH_TO_R_SOURCE, chdir=True)  ## this takes time

class SomeClass:
    def __init__(self, aCurve):
        self._curve = aCurve

if myconfig.VERSION is not None:
    version = myconfig.VERSION
else:
    version = "UNDEFINED"

two_pi = myconfig.PI * 2
And you can change the behaviour of your module at runtime from the
wrapper:
run.py :
import myconfig
myconfig.PATH_TO_R_SOURCE = 'actual/path/to/R/source'
myconfig.PI = 3.14159
# we can even add a new configuration item that isn't present in the original myconfig:
myconfig.VERSION = "1.0"
import mymodule
print "Mymodule.two_pi = %r" % mymodule.two_pi
print "Mymodule.version is %s" % mymodule.version
Output:
> Mymodule.two_pi = 6.28318
> Mymodule.version is 1.0
I know that if I import a module by name, import moduleName, then I can reload it with reload(moduleName).
But, I am importing a bunch of modules with a Kleene star:
from proj import *
How can I reload them in this case?
I think there's a way to reload all Python modules. The code for Python 2.7 is listed below; instead of importing the math module with an asterisk, you can import whatever you need.
from math import *
from sys import *

module_names = modules.keys()
modules.clear()
for name in module_names:
    stmt = 'from ' + name + ' import *'
    try:
        exec(stmt)
    except:
        pass
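On Python 3, reload lives in importlib, and a gentler version of the same idea for a single starred import might be (a sketch; proj stands for your module):
import importlib
import proj

importlib.reload(proj)
# rebind the names that `from proj import *` would have brought in
names = getattr(proj, '__all__', None)
if names is None:
    names = [n for n in vars(proj) if not n.startswith('_')]
globals().update({n: getattr(proj, n) for n in names})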
This is a complicated and confusing issue. The method given below will reload the module and refresh the variables in the given context. However, it will fall over if multiple modules use a starred import on the given module, because the others will retain their original values instead of being updated. In general, even having to reload a module is something you shouldn't be doing, with the exception of working in a REPL. Modules aren't meant to be dynamic; consider other ways of providing the updates you need.
import sys
from importlib import reload  # Python 3; on Python 2, reload is a builtin

def reload_starred(module_name, context):
    # context may be a module name, or a namespace dict such as globals()/locals()
    if isinstance(context, str):
        context = vars(sys.modules[context])
    module = sys.modules[module_name]
    for name in get_public_module_variables(module):
        try:
            del context[name]
        except KeyError:
            pass
    module = reload(module)
    context.update(get_public_module_variables(module))

def get_public_module_variables(module):
    if hasattr(module, "__all__"):
        return dict((k, v) for (k, v) in vars(module).items()
                    if k in module.__all__)
    else:
        return dict((k, v) for (k, v) in vars(module).items()
                    if not k.startswith("_"))
Usage:
reload_starred("my_module", __name__)
reload_starred("my_module", globals())
reload_starred("my_module", "another_module")
def function():
    from my_module import *  # note: star imports inside a function are not allowed in Python 3
    ...
    reload_starred("my_module", locals())