I have multiple files structured like this example.py:
def initialize(context):
    pass

def daj_omacku_teplu(context, data):
    pass

def hmataj_pomaly(context, data):
    pass

def chvatni_paku(context, data):
    pass

def mikaj_laktom(context, data):
    pass
and I need to be able to dynamically import functions from "example.py" in a different Python file, like:
for fn in os.listdir('.'):
    if os.path.isfile(fn):
        from fn import mikaj_laktom
        mikaj_laktom(example_context, sample_data)
For multiple reasons, I cannot change the structure of example.py, so I need a mechanism to load these functions and evaluate them. I tried to use importlib, but it seems to only be able to import a class, not a file that defines only functions.
Thanks for the help.
Python's import does not support importing by file path, so you will need to have the files accessible as modules (see sys.path). Assuming for now that your sources are located in the same folder as the main script, I would use the following (or similar):
import sys

def load_module(module):
    # module_path = "mypackage.%s" % module
    module_path = module
    if module_path in sys.modules:
        return sys.modules[module_path]
    return __import__(module_path, fromlist=[module])
# Main script here... Could be your for loop or anything else
# `m` is a reference to the imported module that contains the functions
m = load_module("example")
m.mikaj_laktom(None, [])
The source files can also be part of another package, in which case you will need an __init__.py in the same folder as the .py files (see packages), and you import with the "mypackage.module" notation. (Note that the top-level folder should be on your path; in that case this is the folder containing "mypackage".)
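For illustration, here is a minimal sketch of that layout and the corresponding call (the names are hypothetical, and the folder containing mypackage is assumed to be on sys.path):

main.py                  # the script that calls load_module()
mypackage/
    __init__.py          # empty file marking the folder as a package
    example.py           # the module containing mikaj_laktom, etc.

# In main.py, load the module by its dotted name:
m = load_module("mypackage.example")
m.mikaj_laktom(None, [])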
UPDATE:
As pointed out by @skyking, there are libs that can help you do the same thing. See this post.
My comment on __init__.py is outdated, since things have changed in py3. See this post for a more detailed explanation.
You were on the right track with importlib. It can be used to load modules by name; however, I do not think you can load them into the global namespace this way (as in from module import function), so you need to load them as module objects and call the required function:
import glob, importlib, os, pathlib, sys

# The directory containing your modules needs to be on the search path.
MODULE_DIR = '/path/to/modules'
sys.path.append(MODULE_DIR)

# Get the stem names (file name, without directory and '.py') of any
# Python files in your directory, load each module by name and run
# the required function.
py_files = glob.glob(os.path.join(MODULE_DIR, '*.py'))
for py_file in py_files:
    module_name = pathlib.Path(py_file).stem
    module = importlib.import_module(module_name)
    module.mikaj_laktom(None, [])
Also, be careful using '.' as your MODULE_DIR, as this will presumably try to load the current Python file as well, which might cause some unexpected behaviour (see the sketch below for one way to guard against that).
Edit: if you are using Python 2, you won't have pathlib in the standard library, so use
module_name = os.path.splitext(os.path.split(py_file)[1])[0]
to get the equivalent of Path.stem.
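If you do need to scan the current directory, here is a minimal sketch of the guard mentioned above (my addition, not part of the original answer; it assumes the code runs as a script, so __file__ is set):

import glob, importlib, os, pathlib, sys

MODULE_DIR = '.'
sys.path.append(MODULE_DIR)
this_file = os.path.abspath(__file__)

for py_file in glob.glob(os.path.join(MODULE_DIR, '*.py')):
    # Compare absolute paths so we never re-import the running script itself.
    if os.path.abspath(py_file) == this_file:
        continue
    module = importlib.import_module(pathlib.Path(py_file).stem)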
I have a file called dns_poison.py that needs to call a package called netscanner. When I try to load the icmpscan module from dns_poison.py I get this message:
ModuleNotFoundError: No module named 'icmpscan'
I've printed sys.path and can confirm that the correct path is in place. The files are located at D:\PythonProjects\Networking\tools, and D:\PythonProjects appears when I print sys.path.
Here is my directory structure:
dns_poison.py
netscanner/
    __init__.py
    icmpscan.py
Code snippets for the files are as follows:
dns_poison.py
import netscanner
netscanner\__init__.py
from icmpscan import ICMPScan
netscanner\icmpscan.py
class ICMPScan:
    def __init__(self, target, count=2, timeout=1):
        self.target = target
        self.count = count
        self.timeout = timeout
        self.active_hosts = []
    # further code below here....
I don't understand why it cannot find the module, as I've used this exact same method on other Python projects without any problems. Any help would be much appreciated.
When you run python dns_poison.py, the importer checks the module path, then the local directory, and eventually finds your netscanner package, which makes the following available:
netscanner
netscanner.icmpscan
netscanner.icmpscan.ICMPScan
Now I ask you: where is plain icmpscan? The importer cannot find it because, well, it doesn't exist. The import path is rooted wherever dns_poison.py resides and does not extend itself to include the absolute paths of any imported modules, because that is simply not how it works. So netscanner can be found because it's at the same level as dns_poison.py, but the importer has no clue where icmpscan.py exists because you haven't told it. So you have two options for altering your __init__.py (a sketch of the first follows below):
from .icmpscan import ICMPScan which works with Python 3.x
from netscanner.icmpscan import ICMPScan which works with both Python 2.x/3.x
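For illustration, a minimal sketch of the first option (the target address below is hypothetical):

# netscanner/__init__.py
from .icmpscan import ICMPScan

# dns_poison.py
import netscanner

scan = netscanner.ICMPScan('192.168.0.1')  # hypothetical target
print(scan.count)  # -> 2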
Couple of references for you:
Python Import System
Python Modules (I recommend you reference section 6.4.2, Intra-package References)
The simplest way to think about this is that imports should be handled relative to the program's entry-point file. Personally, I find this the simplest and most fool-proof way of handling import paths.
In your example, I would have:
from netscanner.icmpscan import ICMPScan
in the main file, rather than adding it to __init__.py.
In Python you can reload a module as follows...
import foobar
import importlib
importlib.reload(foobar)
This works for .py files, but for Python packages it will only reload the package and not any of the nested sub-modules.
With a package:
foobar/__init__.py
foobar/spam.py
foobar/eggs.py
Python Script:
import foobar
# assume `spam/__init__.py` is importing `.spam`
# so we dont need an explicit import.
print(foobar.spam) # ok
import importlib
importlib.reload(foobar)
# foobar.spam WONT be reloaded.
Not to suggest this is a bug, but there are times when it's useful to reload a package and all its submodules (if you want to edit a module while a script runs, for example).
What are some good ways to recursively reload a package in Python?
Notes:
For the purpose of this question, assume the latest Python 3.x (currently using importlib).
Allowing that this may require some edits to the modules themselves.
Assume that wildcard imports aren't used (from foobar import *), since they may complicate reload logic.
Here's a function that recursively reloads a package.
I double-checked that the reloaded modules are updated in the modules where they are used, and that issues with infinite recursion are checked for.
One restriction is that it needs to run on a package (which only makes sense for packages anyway).
import os
import types
import importlib

def reload_package(package):
    assert hasattr(package, "__package__")
    fn = package.__file__
    fn_dir = os.path.dirname(fn) + os.sep
    module_visit = {fn}
    del fn

    def reload_recursive_ex(module):
        importlib.reload(module)
        for module_child in vars(module).values():
            if isinstance(module_child, types.ModuleType):
                fn_child = getattr(module_child, "__file__", None)
                if (fn_child is not None) and fn_child.startswith(fn_dir):
                    if fn_child not in module_visit:
                        # print("reloading:", fn_child, "from", module)
                        module_visit.add(fn_child)
                        reload_recursive_ex(module_child)

    return reload_recursive_ex(package)

# example use
import os
reload_package(os)
I'll offer another answer for the case in which you want to reload only a specific nested module. I found this useful for situations where I was editing a single nested submodule, and reloading all nested submodules via a solution like ideasman42's approach or deepreload would produce undesired behavior.
Assuming you want to reload a module into the workspace below,
my_workspace.ipynb
import importlib
import my_module
import my_other_module_that_I_dont_want_to_reload

print(my_module.test())  # old result
importlib.reload(my_module)
print(my_module.test())  # new result
but my_module.py looks like this:
import my_nested_submodule

def test():
    my_nested_submodule.do_something()
and you just made an edit in my_nested_submodule.py:
def do_something():
    print('look at this cool new functionality!')
You can manually force my_nested_submodule, and only my_nested_submodule, to be reloaded by adjusting my_module.py so it looks like the following:
import my_nested_submodule
import importlib

importlib.reload(my_nested_submodule)

def test():
    my_nested_submodule.do_something()
I've updated the answer from @ideasman42 to always reload modules from the bottom of the dependency tree first. Note that it will raise an error if the dependency graph is not a tree (i.e. contains cycles), as I don't think it is possible to cleanly reload all modules in that case.
import importlib
import os
import types
import pathlib

def get_package_dependencies(package):
    assert hasattr(package, "__package__")
    fn = package.__file__
    fn_dir = os.path.dirname(fn) + os.sep
    node_set = {fn}                # set of module filenames
    node_depth_dict = {fn: 0}      # tracks the greatest depth that we've seen for each node
    node_pkg_dict = {fn: package}  # mapping of module filenames to module objects
    link_set = set()               # tuples of (parent module filename, child module filename)
    del fn

    def dependency_traversal_recursive(module, depth):
        for module_child in vars(module).values():
            # skip anything that isn't a module
            if not isinstance(module_child, types.ModuleType):
                continue
            fn_child = getattr(module_child, "__file__", None)
            # skip anything without a filename or outside the package
            if (fn_child is None) or (not fn_child.startswith(fn_dir)):
                continue
            # have we seen this module before? if not, add it to the database
            if fn_child not in node_set:
                node_set.add(fn_child)
                node_depth_dict[fn_child] = depth
                node_pkg_dict[fn_child] = module_child
            # set the depth to be the deepest depth we've encountered the node at
            node_depth_dict[fn_child] = max(depth, node_depth_dict[fn_child])
            # have we visited this child module from this parent module before?
            if (module.__file__, fn_child) not in link_set:
                link_set.add((module.__file__, fn_child))
                dependency_traversal_recursive(module_child, depth + 1)
            else:
                raise ValueError("Cycle detected in dependency graph!")

    dependency_traversal_recursive(package, 1)
    return (node_pkg_dict, node_depth_dict)

# example use
import collections
node_pkg_dict, node_depth_dict = get_package_dependencies(collections)
for (d, v) in sorted([(d, v) for v, d in node_depth_dict.items()], reverse=True):
    print("Reloading %s" % pathlib.Path(v).name)
    importlib.reload(node_pkg_dict[v])
I am developing a package that has a file structure similar to the following:
test.py
package/
    __init__.py
    foo_module.py
    example_module.py
If I call import package in test.py, I want the package module to appear similar to this:
>>> vars(package)
mappingproxy({'foo': <function foo at 0x…>, 'example': <function example at 0x…>})
In other words, I want the members of all modules in package to be in package's namespace, and I do not want the modules themselves to be in the namespace. package is not a sub-package.
Let's say my files look like this:
foo_module.py:
def foo(bar):
    return bar
example_module.py:
def example(arg):
    return foo(arg)
test.py:
print(example('derp'))
How do I structure the import statements in test.py, example_module.py, and __init__.py to work from outside the package directory (i.e. test.py) and within the package itself (i.e. foo_module.py and example_module.py)? Everything I try gives Parent module '' not loaded, cannot perform relative import or ImportError: No module named 'module_name'.
Also, as a side-note (as per PEP 8): "Relative imports for intra-package imports are highly discouraged. Always use the absolute package path for all imports. Even now that PEP 328 is fully implemented in Python 2.5, its style of explicit relative imports is actively discouraged; absolute imports are more portable and usually more readable."
I am using Python 3.3.
I want the members of all modules in package to be in package's namespace, and I do not want the modules themselves to be in the namespace.
I was able to do that by adapting something I've used in Python 2 to automatically import plug-ins to also work in Python 3.
In a nutshell, here's how it works:
The package's __init__.py file imports all the other Python files in the same package directory, except for those whose names start with an '_' (underscore) character.
It then adds any names in the imported module's namespace to that of the __init__ module (which is also the package's namespace). Note I had to make the example_module module explicitly import foo from .foo_module.
One important aspect of doing things this way is realizing that it's dynamic and doesn't require the package's module names to be hardcoded into the __init__.py file. Of course this requires more code to accomplish, but it also makes the approach very generic and able to work with just about any (single-level) package, since it will automatically import new modules when they're added and no longer attempt to import any that are removed from the directory.
test.py:
from package import *
print(example('derp'))
__init__.py:
def _import_all_modules():
    """ Dynamically imports all modules in this package. """
    import traceback
    import os
    global __all__
    __all__ = []
    globals_, locals_ = globals(), locals()

    # Dynamically import all the package modules in this file's directory.
    # (Using __file__ rather than the bare package name makes this work
    # regardless of the current working directory.)
    for filename in os.listdir(os.path.dirname(os.path.abspath(__file__))):
        # Process all Python files in the directory that don't start
        # with an underscore (which also prevents this module from
        # importing itself).
        if filename[0] != '_' and filename.split('.')[-1] in ('py', 'pyw'):
            modulename = filename.split('.')[0]  # Filename sans extension.
            package_module = '.'.join([__name__, modulename])
            try:
                module = __import__(package_module, globals_, locals_, [modulename])
            except:
                traceback.print_exc()
                raise
            for name in module.__dict__:
                if not name.startswith('_'):
                    globals_[name] = module.__dict__[name]
                    __all__.append(name)

_import_all_modules()
foo_module.py:
def foo(bar):
    return bar
example_module.py:
from .foo_module import foo  # added

def example(arg):
    return foo(arg)
I think you can get the values you need without cluttering up your namespace by using from module import name style imports. I think these imports will work for what you are asking for:
Imports for example_module.py:
from package.foo_module import foo
Imports for __init__.py:
from package.foo_module import foo
from package.example_module import example
__all__ = ['foo', 'example']  # not strictly necessary, but makes clear what is public
Imports for test.py:
from package import example
Note that this only works if you're running test.py (or something else at the same level of the package hierarchy). Otherwise you'd need to make sure the folder containing package is in the Python module search path (either by installing the package somewhere Python will look for it, or by adding the appropriate folder to sys.path, as sketched below).
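For example, a minimal sketch of the sys.path variant (the directory name is a placeholder you would adjust to your layout):

import sys

# Hypothetical location of the directory that contains `package`.
sys.path.insert(0, '/path/to/parent_of_package')

from package import example
print(example('derp'))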
What would be the best (read: cleanest) way to tell Python to import all modules from some folder?
I want to allow people to put their "mods" (modules) in a folder in my app which my code should check on each startup and import any module put there.
I also don't want an extra scope added to the imported stuff (not "myfolder.mymodule.something", but "something")
If transforming the folder itself into a package, through the use of an __init__.py file and from <foldername> import *, suits you, you can iterate over the folder contents with os.listdir or glob.glob, and import each file ending in ".py" with the __import__ built-in function:
import os

for name in os.listdir("plugins"):
    if name.endswith(".py"):
        # Strip the extension.
        module = name[:-3]
        # Set the module name in the current global namespace
        # (fromlist makes __import__ return the submodule itself):
        globals()[module] = __import__("plugins." + module, fromlist=[module])
The benefit of this approach is that it allows you to dynamically pass module names to __import__ (while the import statement needs the module names to be hardcoded), and it allows you to check other things about the files (maybe size, or whether they import certain required modules) before importing them.
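As a variant sketch (my own, not from the answer above), here is the same loop with importlib.import_module, which is generally preferred over calling __import__ directly; it assumes plugins has an __init__.py and its parent directory is on sys.path:

import importlib
import os

for name in os.listdir("plugins"):
    if name.endswith(".py") and not name.startswith("_"):
        # Example of inspecting a file before importing it:
        if os.path.getsize(os.path.join("plugins", name)) == 0:
            continue  # skip empty stubs
        module = name[:-3]
        globals()[module] = importlib.import_module("plugins." + module)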
Create a file named __init__.py inside the folder and import the folder name like this:

>>> from <folder_name> import *  # Try to avoid importing everything when you can
>>> from <folder_name> import module1, module2, module3  # And so on
You might want to try this project: https://gitlab.com/aurelien-lourot/importdir
With this module, you only need to write two lines to import all plugins from your directory, and you don't need an extra __init__.py (or any other extra file):
import importdir
importdir.do("plugins/", globals())
I have a directory, let's call it Storage, full of packages with unwieldy names like mypackage-xxyyzzww, and of course Storage is on my PYTHONPATH. Since the packages have long, unmemorable names, all of them are symlinked to friendlier names, such as mypackage.
Now, I don't want to rely on file system symbolic links to do this, instead I tried mucking around with sys.path and sys.modules. Currently I'm doing something like this:
import imp
imp.load_package('mypackage', 'Storage/mypackage-xxyyzzww')
How bad is it to do things this way, and is there a chance this will break in the future? One funny thing is that there's no mention of the imp.load_package function in the docs.
EDIT: besides not relying on symbolic links, I can't use the PYTHONPATH variable anymore.
Instead of using imp, you can assign different names to imported modules.
import mypackage_xxyyzzww as mypackage
If you then create a __init__.py file inside of Storage, you can add several of the above lines to make importing easier.
Storage/__init__.py:
import mypackage_xxyyzzww as mypackage
import otherpackage_xxyyzzww as otherpackage
Interpreter:
>>> from Storage import mypackage, otherpackage
importlib may be more appropriate, as it uses/implements the PEP 302 mechanism.
Follow the DictImporter example, but override find_module to find the real filename and store it in the dict, then override load_module to get the code from the found file.
You shouldn't need to use sys.path once you've created your Storage module
#from importlib import abc
import imp
import os
import sys
import logging

logging.basicConfig(level=logging.DEBUG)
dprint = logging.debug

class MyImporter(object):
    def __init__(self, path):
        self.path = path
        self.names = {}

    def find_module(self, fullname, path=None):
        dprint("find_module({fullname},{path})".format(**locals()))
        ml = imp.find_module(fullname, path)
        dprint(repr(ml))
        raise ImportError

    def load_module(self, fullname):
        dprint("load_module({fullname})".format(**locals()))
        return imp.load_module(fullname)
        raise ImportError

def load_storage(path, modname=None):
    if modname is None:
        modname = os.path.basename(path)
    mod = imp.new_module(modname)
    sys.modules[modname] = mod
    assert mod.__name__ == modname
    mod.__path__ = [path]
    #sys.meta_path.append(MyImporter(path))
    mod.__loader__ = MyImporter(path)
    return mod

if __name__ == "__main__":
    load_storage("arbitrary-path-to-code/Storage")
    from Storage import plain
    from Storage import mypkg
Then when you import Storage.mypackage, python will immediately use your importer without bothering to look on sys.path
That doesn't work. The code above does work to import ordinary modules under Storage without requiring Storage to be on sys.path, but both 3.1 and 2.6 seem to ignore the loader attribute mentioned in PEP302.
If I uncomment the sys.meta_path line, 3.1 dies with StackOverflow, and 2.6 dies with ImportError. hmmm... I'm out of time now, but may look at it later.
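For what it's worth, here is a sketch of the same renaming idea using importlib instead of imp (my own addition, assuming Python 3.5+ and that mypackage-xxyyzzww is a package directory containing an __init__.py):

import importlib.util
import os
import sys

def load_package_as(alias, init_path):
    # Build a spec from the package's __init__.py and register the
    # module in sys.modules under the friendlier alias.
    spec = importlib.util.spec_from_file_location(
        alias, init_path,
        submodule_search_locations=[os.path.dirname(init_path)])
    module = importlib.util.module_from_spec(spec)
    sys.modules[alias] = module
    spec.loader.exec_module(module)
    return module

mypackage = load_package_as("mypackage",
                            "Storage/mypackage-xxyyzzww/__init__.py")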
Packages are just entries in the namespace. You should not name your path components with anything that is not a legal Python variable name.
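As a sketch of that point (assuming the directory has been renamed to the legal name mypackage_xxyyzzww and is importable):

import importlib
import sys

# An alias is just another sys.modules entry.
sys.modules["mypackage"] = importlib.import_module("mypackage_xxyyzzww")

import mypackage  # now resolves to the same module object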