Is it possible to create a package dynamically, something like:
subpackage = create_subpackage(package_name, package_path)
The package should be associated with a physical path so that modules from that path can be imported through it.
The purpose is to be able to have subpackages that are not subdirectories of their parent package.
e.g.
main_package/
    __init__.py
    sub_package/
        __init__.py
        some_module.py
Contents of main_package/__init__.py:
sub_package = create_subpackage("sub_package", "/a/path/to/sub_package")
globals()["sub_package"] = sub_package
Contents of some_random_script.py:
from main_package.sub_package import some_module
While this won't give you exactly the layout you're asking for, this might help: http://docs.python.org/tutorial/modules.html#packages-in-multiple-directories
Basically, each package has a __path__ attribute that contains a list of places to search for submodules. And you can modify it to your liking.
e.g.
main_package/__init__.py:
__path__ += ['/tmp/some/other/path/']
/tmp/some/other/path/sub_package/__init__.py:
value = 42
test.py:
from main_package.sub_package import value
print(value)
If that doesn't cut it, you can go read up on import hooks, the all-powerful (and correspondingly complicated) way to modify Python's import behavior.
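If you do want the exact create_subpackage call from your example, a minimal sketch is possible with the standard types.ModuleType and sys.modules machinery (the parent parameter and its default here are assumptions for illustration, not part of the question):

import sys
import types

def create_subpackage(name, path, parent="main_package"):
    # Build an empty package object and point its __path__ at the
    # directory that should hold its submodules.
    full_name = parent + "." + name
    package = types.ModuleType(full_name)
    package.__path__ = [path]
    package.__package__ = full_name
    # Registering it in sys.modules lets normal imports such as
    # `from main_package.sub_package import some_module` resolve through it.
    sys.modules[full_name] = package
    return package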
So I have a tree of modules and packages that is two levels deep.
My goal, at the top level in main.py, is to refer to a sub-module using this syntax:
import pkg1
pkg1.bebrs.fn_3.run()
The chained dot notation is very important to me.
Inside the __init__.py in the bebrs folder, my init states:
from pkg1.bebrs import fn_3
it explicitly states the name of parent and current package.
Is there any way to write the pkg1 and bebrs parts of from pkg1.bebrs import fn_3 in a more generalized way? I wish I could just put generic __init__.py files everywhere and still be able to import nested things with dot notation, instead of writing a very explicit nested import inside each nested __init__.py file.
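A relative import is the generalized form being asked about here: it avoids naming the parent and current package entirely. A minimal sketch, assuming bebrs is always imported as part of the pkg1 package:

pkg1/bebrs/__init__.py:

from . import fn_3   # no pkg1 or bebrs hard-coded; works wherever the package lands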
I am having an issue with the __import__ function. It seems to import only the top-level package of the module, not the module file itself.
For instance I have:
test_suite/assert_array_length.py
when I pass this into __import__:
moduleLocation = "test_suite.assert_array_length"
module = __import__(moduleLocation)
print(module)
I am getting:
[sub_directories]/test_suite/__init__.pyc
The call sequence goes from run_tests.py to test_runner.py; test_runner.py then imports assert_array_length.py. They are laid out like this:
run_tests.py
|-----------test_runner.py
|-----------assert_array_length.py
Because it's importing the __init__.py, I can't get what I need from the assert_array_length.py file.
__import__ imports the module you asked for. However, if you checked the documentation, you would find the following:
When the name variable is of the form package.module, normally, the top-level package (the name up till the first dot) is returned, not the module named by name.
You may prefer importlib.import_module, which will return package.module instead of package if you tell it to import package.module.
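For example (a minimal sketch using the names from the question; the printed reprs are abbreviated):

import importlib

# __import__ with a dotted name returns the top-level package:
top = __import__("test_suite.assert_array_length")
print(top)     # <module 'test_suite' from '.../test_suite/__init__.py'>

# importlib.import_module returns the submodule you actually asked for:
module = importlib.import_module("test_suite.assert_array_length")
print(module)  # <module 'test_suite.assert_array_length' ...>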
I have a module structured as follows:
/module
    __init__.py
    /submod_1
        __init__.py
        submod_1_class.py
    /submod_2
        __init__.py
        submod_2_class.py
but I find it incredibly annoying to have to import a class defined in submod_1_class.py with:
from module.submod_1.submod_1_class import my_class
What I would prefer to be able to type is:
from module import my_class
I have browsed through the site-packages folder and looked through popular modules like numpy, but I haven't been able to figure out how, for example:
import numpy
a = numpy.array([1,2,3,4,5])
can be used when the definition of numpy array objects is buried deep within several subfolders of the numpy package.
You can use __init__.py files to define what can be imported from a given package. A very simple addition to your structure, for example, would pull Class up from submod_1_class so that external users can simply write from module import Class.
/module
    __init__.py          # contains: from .submod_1 import Class
    /submod_1
        __init__.py      # contains: from .submod_1_class import Class
        submod_1_class.py
    /submod_2
        submod_2_class.py
In numpy, for example, the top-level __init__.py contains a line that reads:
from .core import *
This means everything that the core subpackage exports is available directly in the top-level numpy namespace, even though it may actually be defined deep inside some complex structure.
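With those re-exports in place, both spellings resolve to the same class (a small usage sketch based on the example layout above):

from module import Class                            # short form via the re-exports
from module.submod_1.submod_1_class import Class    # the full path still works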
I'm writing a module to load a dataset. I want to keep the interface/API as clean as possible, so I've hidden internal functions and variables by prefixing their names with __. Awesome. My module, however, imports other packages (e.g. numpy) which still appear in my module's namespace. How can I avoid this?
i.e. my file looks something like:
Loader.py:
import numpy as np

__INTERNAL_VAR1 = True
EXTERNAL_VAR = True

def loadData():
    data = __INTERNAL_FUNC1()
    ...
    return data

def __INTERNAL_FUNC1():
    ...
    return data
and when I import my module, np is exposed:
> import Loader
> Loader.[TAB]
Loader.EXTERNAL_VAR Loader.loadData Loader.np
If the autocompletion you are using is correctly implemented, it should honour the __all__ attribute of modules, if set.
Add a list of all the names your module exports in that variable:
__all__ = ['loadData', 'EXTERNAL_VAR']
The __all__ variable is used to determine what names are imported if you use a from modulename import * wildcard import, as well as by the help() function when documenting your module.
There is no point in using double-underscore names as globals; it is a single underscore at the start that marks such names as 'internal' (by convention).
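Putting both points together, a sketch of Loader.py (the _np alias and the lowercase helper name are illustrative choices, not from the question):

import numpy as _np   # single leading underscore: internal by convention

__all__ = ['loadData', 'EXTERNAL_VAR']   # names exported by wildcard imports

_INTERNAL_VAR1 = True
EXTERNAL_VAR = True

def loadData():
    return _internal_func1()

def _internal_func1():
    # placeholder for the real loading logic
    return _np.array([1, 2, 3])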
Another solution could be to create an __init__.py that contains a line that imports the functions and variables you need:
from .Loader import EXTERNAL_VAR, loadData
This file should be placed inside a folder containing your module:
└── Loader
├── __init__.py
└── Loader.py
Now when you import Loader you can access directly only EXTERNAL_VAR and loadData while all the other classes and modules are in Loader.Loader.
This solution should be independent of your autocompletion system.
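For example, a hypothetical interactive session with this layout:

>>> import Loader
>>> Loader.EXTERNAL_VAR          # re-exported by __init__.py
True
>>> Loader.loadData              # re-exported by __init__.py
<function loadData at 0x...>
>>> Loader.Loader.np             # numpy is still reachable, one level down
<module 'numpy' ...>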
Consider this layout:
/
├── test.py
└── lib/
    ├── __init__.py
    └── x/
        ├── __init__.py
        └── p.py
with p.py:
class P:
    pass

p1 = P()
With test.py:
import sys
import os
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), "lib"))
import lib.x.p
import x.p
print(id(lib.x.p.p1))
print(id(x.p.p1))
Here I get different object IDs even though I am importing the same object from the same package/module. Can someone please explain this behaviour? It is very confusing, and I did not find any documentation about it.
Thanks!
Modules are cached in the dictionary sys.modules, using their dotted names as keys. Since you are importing the same module under two different dotted names, you end up with two copies of it, and also with two copies of everything inside it.
The solution is easy: Don't do this, and try to avoid messing around with sys.path.
x.p and lib.x.p aren't the same module. They come from the same file, but Python doesn't determine a module's identity by its file; a module's identity is based on its package-qualified name. The module search logic may have found the same file for both modules, but they're still loaded and executed separately, and objects created in one module are distinct from objects created in another.
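You can see the duplication directly in sys.modules (a quick check, assuming the test.py above has already run both imports):

import sys

print("lib.x.p" in sys.modules)                       # True
print("x.p" in sys.modules)                           # True
print(sys.modules["lib.x.p"] is sys.modules["x.p"])   # False: two distinct module objects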