I want to inherit from a class in a file that lies in a directory above the current one.
Is it possible to relatively import that file?
from ..subpkg2 import mod
Per the Python docs: when inside a package hierarchy, use leading dots, as the import statement documentation says:
When specifying what module to import you do not have to specify the absolute name of the module. When a module or package is contained within another package it is possible to make a relative import within the same top package without having to mention the package name. By using leading dots in the specified module or package after from you can specify how high to traverse up the current package hierarchy without specifying exact names. One leading dot means the current package where the module making the import exists. Two dots means up one package level. Three dots is up two levels, etc.
So if you execute from . import mod from a module in the pkg package then you will end up importing pkg.mod. If you execute from ..subpkg2 import mod from within pkg.subpkg1 you will import pkg.subpkg2.mod. The specification for relative imports is contained within PEP 328.
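To make the quoted rules concrete, here is a sketch of such a hierarchy (pkg, subpkg1, subpkg2, and mod are the names used in the quote; mod1.py is a hypothetical module added for illustration):
pkg/
    __init__.py
    subpkg1/
        __init__.py
        mod1.py
    subpkg2/
        __init__.py
        mod.py
# inside pkg/subpkg1/mod1.py
from ..subpkg2 import mod  # two dots: up one level to pkg, then into subpkg2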
PEP 328 deals with absolute/relative imports.
import sys
sys.path.append("..") # Adds higher directory to python modules path.
gimel's answer is correct if you can guarantee the package hierarchy he mentions. If you can't -- if your real need is as you expressed it, exclusively tied to directories and without any necessary relationship to packaging -- then you need to work on __file__ to find out the parent directory (a couple of os.path.dirname calls will do;-), then (if that directory is not already on sys.path) temporarily insert said dir at the very start of sys.path, __import__, remove said dir again -- messy work indeed, but, "when you must, you must" (and Python strives to never stop the programmer from doing what must be done -- just like the ISO C standard says in the "Spirit of C" section of its preface!-).
Here is an example that may work for you:
import sys
import os.path
sys.path.append(
    os.path.abspath(os.path.join(os.path.dirname(__file__), os.path.pardir)))
import module_in_parent_dir
Import module from a directory which is exactly one level above the current directory:
from .. import module
How to load a module that is a directory up
preface: I did a substantial rewrite of a previous answer with the hope of helping ease people into python's ecosystem, and hopefully giving everyone the best chance of success with python's import system.
This will cover relative imports within a package, which I think is the most probable case for OP's question.
Python is a modular system
This is why we write import foo to load a module "foo" from the root namespace, instead of writing:
import os.path  # needed by the anti-pattern below; please avoid doing this
foo = dict()  # please avoid doing this
with open(os.path.join(os.path.dirname(__file__), '../foo.py')) as foo_fh:  # please avoid doing this
    exec(compile(foo_fh.read(), 'foo.py', 'exec'), foo)  # please avoid doing this
Python isn't coupled to a file-system
This is why we can embed python in environments where there isn't a de facto filesystem, such as Jython, without providing a virtual one.
Being decoupled from a filesystem makes imports flexible; this design allows for things like imports from archive/zip files, import singletons, bytecode caching, cffi extensions, even remote code definition loading.
So if imports are not coupled to a filesystem, what does "one directory up" mean? We have to pick some heuristics, but we can do that; for example, when working within a package, some heuristics have already been defined that make relative imports like .foo and ..foo work within the same package. Cool!
If you sincerely want to couple your source code loading patterns to a filesystem, you can do that. You'll have to choose your own heuristics and use some kind of importing machinery; I recommend importlib.
Python's importlib example looks something like this:
import importlib.util
import os.path
import sys
# For illustrative purposes.
file_path = os.path.join(os.path.dirname(__file__), '../foo.py')
module_name = 'foo'
foo_spec = importlib.util.spec_from_file_location(module_name, file_path)
# foo_spec is a ModuleSpec specifying a SourceFileLoader
foo_module = importlib.util.module_from_spec(foo_spec)
sys.modules[module_name] = foo_module
foo_spec.loader.exec_module(foo_module)
foo = sys.modules[module_name]
# foo is the sys.modules['foo'] singleton
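If you use this recipe in more than one place, it can be wrapped in a small helper. This is a minimal sketch built on the same importlib calls shown above; the function name import_from_path is my own, not a standard API:
import importlib.util
import sys

def import_from_path(module_name, file_path):
    """Load a module from an explicit file path and register it in sys.modules."""
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    if spec is None:
        raise ImportError(f"cannot create a spec for {module_name!r} from {file_path!r}")
    module = importlib.util.module_from_spec(spec)
    sys.modules[module_name] = module  # register before exec, mirroring the example above
    spec.loader.exec_module(module)
    return module

# usage, equivalent to the example above:
# foo = import_from_path('foo', os.path.join(os.path.dirname(__file__), '../foo.py'))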
Packaging
There is a great example project available officially here: https://github.com/pypa/sampleproject
A python package is a collection of information about your source code that can inform other tools how to copy your source code to other computers, and how to integrate your source code into that system's path so that import foo works on other computers (regardless of interpreter, host operating system, etc.).
Directory Structure
Let's have a package named foo, in some directory (preferably an empty directory).
some_directory/
foo.py # `if __name__ == "__main__":` lives here
My preference is to create setup.py as a sibling to foo.py, because it makes writing the setup.py file simpler. However, you can write configuration to change/redirect everything setuptools does by default if you like; for example, putting foo.py under a "src/" directory is somewhat popular, but not covered here.
some_directory/
foo.py
setup.py
.
#!/usr/bin/env python3
# setup.py
import setuptools
setuptools.setup(
name="foo",
...
py_modules=['foo'],
)
.
python3 -m pip install --editable ./ # or path/to/some_directory/
"editable" aka -e will yet-again redirect the importing machinery to load the source files in this directory, instead copying the current exact files to the installing-environment's library. This can also cause behavioral differences on a developer's machine, be sure to test your code!
There are tools other than pip, however I'd recommend pip be the introductory one :)
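To sanity-check that the editable install worked, you can ask python where it found the module; this quick check is my own suggestion, not part of the original steps:
python3 -c "import foo; print(foo.__file__)"
# should print a path inside some_directory/, not a copy in site-packages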
I also like to make foo a "package" (a directory containing __init__.py) instead of a module (a single ".py" file). Both "packages" and "modules" can be loaded into the root namespace, but packages allow for nested namespaces, which is helpful if we want to have a "relative one directory up" import.
some_directory/
foo/
__init__.py
setup.py
.
#!/usr/bin/env python3
# setup.py
import setuptools
setuptools.setup(
name="foo",
...
packages=['foo'],
)
I also like to make a foo/__main__.py; this allows python to execute the package as a module, e.g. python3 -m foo will execute foo/__main__.py as __main__.
some_directory/
foo/
__init__.py
__main__.py # `if __name__ == "__main__":` lives here, `def main():` too!
setup.py
.
#!/usr/bin/env python3
# setup.py
import setuptools
setuptools.setup(
name="foo",
...
packages=['foo'],
...
entry_points={
'console_scripts': [
# "foo" will be added to the installing-environment's text mode shell, eg `bash -c foo`
'foo=foo.__main__:main',
]
},
)
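For reference, a minimal foo/__main__.py compatible with the console_scripts entry above ('foo=foo.__main__:main') might look like this; the body of main is a placeholder:
#!/usr/bin/env python3
# foo/__main__.py

def main():
    print("hello from foo")  # placeholder behavior
    return 0

if __name__ == "__main__":
    raise SystemExit(main())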
Let's flesh this out with some more modules:
Basically, you can have a directory structure like so:
some_directory/
bar.py # `import bar`
foo/
__init__.py # `import foo`
__main__.py
baz.py # `import foo.baz`
spam/
__init__.py # `import foo.spam`
eggs.py # `import foo.spam.eggs`
setup.py
setup.py conventionally holds metadata information about the source code within, such as:
what dependencies are needed to install named "install_requires"
what name should be used for package management (install/uninstall "name"); I suggest this match your primary python package name, in our case foo, though substituting underscores for hyphens is popular
licensing information
maturity tags (alpha/beta/etc),
audience tags (for developers, for machine learning, etc),
single-page documentation content (like a README),
shell names (names you type at user shell like bash, or names you find in a graphical user shell like a start menu),
a list of python modules this package will install (and uninstall)
a de facto "run tests" entry point, python ./setup.py test
It's very expansive; it can even compile C extensions on the fly if a source module is being installed on a development machine. For an everyday example I recommend the PyPA Sample Repository's setup.py
If you are releasing a build artifact, e.g. a copy of the code that is meant to run on nearly identical computers, a requirements.txt file is a popular way to snapshot exact dependency information, where "install_requires" is a good way to capture minimum and maximum compatible versions. However, given that the target machines are nearly identical anyway, I highly recommend creating a tarball of an entire python prefix. This can be tricky, and too detailed to get into here. Check out pip install's --target option, or virtualenv aka venv, for leads.
back to the example
how to import a file one directory up:
From foo/spam/eggs.py, if we wanted code from foo/baz we could ask for it by its absolute namespace:
import foo.baz
If we wanted to reserve capability to move eggs.py into some other directory in the future with some other relative baz implementation, we could use a relative import like:
from .. import baz
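Putting both styles together, a hypothetical foo/spam/eggs.py might read:
# foo/spam/eggs.py
import foo.baz       # absolute: always anchored at the foo package
from .. import baz   # relative: "one package level up", wherever foo ends up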
Here's a three-step, somewhat minimalist version of ThorSummoner's answer for the sake of clarity. It doesn't quite do what I want (I'll explain at the bottom), but it works okay.
Step 1: Make directory and setup.py
filepath_to/project_name/
setup.py
In setup.py, write:
import setuptools
setuptools.setup(name='project_name')
Step 2: Install this directory as a package
Run this code in console:
python -m pip install --editable filepath_to/project_name
Instead of python, you may need to use python3 or something, depending on how your python is installed. Also, you can use -e instead of --editable.
Now, your directory will look more or less like this. I don't know what the egg stuff is.
filepath_to/project_name/
setup.py
test_3.egg-info/
dependency_links.txt
PKG-INFO
SOURCES.txt
top_level.txt
This folder is considered a python package and you can import from files in this parent directory even if you're writing a script anywhere else on your computer.
Step 3. Import from above
Let's say you make two files, one in your project's main directory and another in a sub directory. It'll look like this:
filepath_to/project_name/
top_level_file.py
subdirectory/
subfile.py
setup.py |
test_3.egg-info/ |----- Ignore these guys
... |
Now, if top_level_file.py looks like this:
x = 1
Then I can import it from subfile.py, or really any other file anywhere else on your computer.
# subfile.py OR some_other_python_file_somewhere_else.py
import random # This is a standard package that can be imported anywhere.
import top_level_file # Now, top_level_file.py works similarly.
print(top_level_file.x)
This is different from what I was looking for: I hoped python had a one-line way to import from a file above. Instead, I have to treat the script like a module, do a bunch of boilerplate, and install it globally for the entire python installation to have access to it. It's overkill. If anyone has a simpler method that doesn't involve the above process or importlib shenanigans, please let me know.
A polished version of Alex Martelli's answer, using pathlib:
import pathlib
import sys
_parentdir = pathlib.Path(__file__).parent.parent.resolve()
sys.path.insert(0, str(_parentdir))
import module_in_parent_dir
sys.path.remove(str(_parentdir))
To run python /myprogram/submodule/mymodule.py which imports /myprogram/mainmodule.py, e.g., via
from mainmodule import *
on Linux (e.g., in the python Docker image), I had to add the program root directory to PYTHONPATH:
export PYTHONPATH=/myprogram
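If you'd rather not export the variable for the whole shell session, the same effect can be scoped to a single invocation:
PYTHONPATH=/myprogram python /myprogram/submodule/mymodule.py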
It is 2022 and none of the answers really worked for me. Here is what worked in the end
import sys
sys.path.append('../src')  # the directory that contains my_class.py (see structure below)
import my_class
My directory structure:
src/
    my_class.py
notebooks/
    mynotebook.ipynb
I imported my_class from mynotebook.ipynb.
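Note that notebooks have no __file__, so the relative path above is resolved against the notebook's working directory. A slightly more explicit sketch, assuming the notebook runs with notebooks/ as its working directory:
import sys
import pathlib

# resolve src/ relative to the notebook's current working directory
src_dir = pathlib.Path.cwd().parent / 'src'
sys.path.append(str(src_dir))

import my_class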
You can use the sys.path.append() method to add the directory containing the package to the list of paths searched for modules. For example, if the package is located two directories above the current directory, you can use the following code:
import sys
sys.path.append("../../")
If the package is located one directory above the current directory, you can use the code below:
import sys
sys.path.append("..")
Python is a modular system
Python doesn't rely on a file system
To load python code reliably, have that code in a module, and that module installed in python's library.
Installed modules can always be loaded from the top level namespace with import <name>
There is a great sample project available officially here: https://github.com/pypa/sampleproject
Basically, you can have a directory structure like so:
the_foo_project/
setup.py
bar.py # `import bar`
foo/
__init__.py # `import foo`
baz.py # `import foo.baz`
faz/ # `import foo.faz`
__init__.py
daz.py # `import foo.faz.daz` ... etc.
.
Be sure to declare your setuptools.setup() in setup.py,
official example: https://github.com/pypa/sampleproject/blob/master/setup.py
In our case we probably want to export bar.py and foo/__init__.py; my brief example:
setup.py
#!/usr/bin/env python3
import setuptools
setuptools.setup(
...
py_modules=['bar'],
packages=['foo'],
...
entry_points={},
# Note, any changes to your setup.py, like adding to `packages`, or
# changing `entry_points` will require the module to be reinstalled;
# `python3 -m pip install --upgrade --editable ./the_foo_project`
)
.
Now we can install our module into the python library;
with pip, you can install the_foo_project into your python library in edit mode,
so we can work on it in real time
python3 -m pip install --editable=./the_foo_project
# if you get a permission error, you can always use
# `pip ... --user` to install in your user python library
.
Now from any python context, we can load our shared py_modules and packages
foo_script.py
#!/usr/bin/env python3
import bar
import foo
print(dir(bar))
print(dir(foo))
Related
I am an experienced java enterprise developer but very new to a python enterprise development shop. I am currently struggling to understand why some imports work while others don't.
Some background: Our dev team recently upgraded python from 3.6 to 3.10.5 and following is our package structure
src/
bunch of files (dockerfile, Pipfile, requirements.txt, shell scripts, etc)
package/
__init__.py
moduleA.py
subpackage1/
__init__.py
moduleX.py
moduleY.py
subpackage2/
__init__.py
moduleZ.py
tests/
__init__.py
test1.py
Now, inside the moduleA.py, I am trying to import subpackage2/moduleZ.py like so
from .subpackage2 import moduleZ
But, I get the error saying
ImportError: attempted relative import with no known parent package
The funny thing is that if I move moduleA.py out of package/ and into src/ then it is able to find everything. I am not sure why this is the case.
I run moduleA.py by executing python package/moduleA.py.
Now, I read that maybe there is a problem because you have to give a -m parameter if running a module as a script (or something along those lines). But, if I do that, I get the following error:
ModuleNotFoundError: No module named 'package/moduleA.py'
I even tried running package/moduleA with the .py removed, but that does not work either. I can understand why, as I technically never installed it?
All of this happened because apparently, the tests broke and to make it work they added relative imports. They changed the import from "from subpackage2 import moduleZ" to "from .subpackage2 import moduleZ" and the tests started working, but the app started failing.
Any understanding I can get would be much appreciated.
The -m parameter is used with the import name, not the path. So you'd use python3 -m package.moduleA (with . instead of /, and no .py), not python3 -m package/moduleA.py.
That said, it only works if package.moduleA is locatable from one of the roots in sys.path. Shy of installing the package, the simplest way to make it work is to ensure your working directory is src (so package exists in the working directory):
$ cd path/to/src
$ python3 -m package.moduleA
and, with your existing setup, if moduleA.py includes a from .subpackage2 import moduleZ, the import should work; Python knows package.moduleA is a module within package, so it can use a relative import to look for a sibling package to moduleA named subpackage2, and then find moduleZ inside it.
Obviously, this is brittle: it only works if you cd to the src root directory before running Python, or hack the path to src into PYTHONPATH, which is a terrible hack if the code ever has to be run by anyone else. Ideally, you make this an installable package and install it (in global site-packages, user site-packages, or within a virtual environment created with the built-in venv module or the third-party virtualenv module); then your working directory no longer matters, since site-packages will be part of your sys.path automatically. For simple testing, as long as the working directory is correct (not sure what it was for you) and you use -m correctly (you were using it incorrectly), relative imports will work, but it's not the long-term solution.
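A minimal sketch of the installable-package route (the commands assume you have written packaging metadata such as a setup.py or pyproject.toml for src, as other answers here describe):
$ cd path/to/src
$ python3 -m venv .venv
$ . .venv/bin/activate
$ python3 -m pip install --editable .
$ python3 -m package.moduleA   # now works regardless of working directory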
So first of all - the root importing directory is the directory from which you're running the main script.
This directory by default is the root for all imports from all scripts.
So if you're executing a script from directory src you can do imports like these:
from package.moduleA import *
from package.subpackage1.moduleX import *
But now in the files moduleA and moduleX you need to make imports based on the root folder. If you want to import something from the module moduleY inside moduleX, you need to do:
# this is inside moduleX
from package.subpackage1.moduleY import *
This is because python looks for modules in specific locations.
The first location is your root directory - the directory from which you execute your main script.
The second location is the directory with modules installed by pip.
You can check all directories using following:
import sys
for p in sys.path:
print(p)
Now, to solve your problem there are a couple of solutions.
The fast one, but IMHO not the best one, is to add all paths with submodules to sys.path - the list variable with all directories where python looks for modules.
import sys

new_path = "/path/to/application/app/folder/src/package/subpackage1"
if new_path not in sys.path:
    sys.path.append(new_path)
Another solution is to use full path for imports in all package modules:
from package.subpackage1.moduleX import *
I think in your case it will be the correct solution.
You can also combine 2 solutions.
First add folders with subpackages to sys.path and use the subpackage folders as root folders for imports. But it's a good solution only if you have a complex submodule structure. And it's not the best solution if in the future you need to deploy your package as a wheel or share it between multiple projects.
It seems there are already quite a few questions here about relative imports in python 3, but after going through many of them I still didn't find the answer for my issue.
So here is the question.
I have a package shown below
package/
__init__.py
A/
__init__.py
foo.py
test_A/
__init__.py
test.py
and I have a single line in test.py:
from ..A import foo
now, I am in the folder of package, and I run
python -m test_A.test
I got message
"ValueError: attempted relative import beyond top-level package"
but if I am in the parent folder of package, e.g., I run:
cd ..
python -m package.test_A.test
everything is fine.
Now my question is:
When I am in the package folder and I run the module inside the test_A sub-package as test_A.test, based on my understanding ..A goes up only one level, which is still within the package folder. Why does it give a message saying "beyond top-level package"? What exactly is the reason that causes this error message?
EDIT: There are better/more coherent answers to this question in other questions:
Sibling package imports
Relative imports for the billionth time
Why doesn't it work? It's because python doesn't record where a package was loaded from. So when you do python -m test_A.test, it basically just discards the knowledge that test_A.test is actually stored in package (i.e. package is not considered a package). Attempting from ..A import foo is trying to access information it doesn't have any more (i.e. sibling directories of a loaded location). It's conceptually similar to allowing from ..os import path in a file in math. This would be bad because you want the packages to be distinct. If they need to use something from another package, then they should refer to them globally with from os import path and let python work out where that is with $PATH and $PYTHONPATH.
When you use python -m package.test_A.test, then using from ..A import foo resolves just fine because it kept track of what's in package and you're just accessing a child directory of a loaded location.
Why doesn't python consider the current working directory to be a package? NO CLUE, but gosh it would be useful.
import sys
sys.path.append("..") # Adds higher directory to python modules path.
Try this.
Worked for me.
Assumption:
If you are in the package directory, A and test_A are separate packages.
Conclusion:
..A imports are only allowed within a package.
Further notes:
Making relative imports only available within packages is useful if you want to ensure that packages can be placed on any path located on sys.path.
EDIT:
Am I the only one who thinks that this is insane!? Why in the world is the current working directory not considered to be a package? – Multihunter
The current working directory is usually located in sys.path. So, all files there are importable. This has been the behavior since Python 2, when packages did not yet exist. Making the running directory a package would allow imports of modules both as "import .A" and as "import A", which would then be two different modules. Maybe this is an inconsistency to consider.
None of these solutions worked for me in 3.6, with a folder structure like:
package1/
subpackage1/
module1.py
package2/
subpackage2/
module2.py
My goal was to import from module1 into module2. What finally worked for me was, oddly enough:
import sys
sys.path.append(".")
Note the single dot as opposed to the two-dot solutions mentioned so far.
Edit: The following helped clarify this for me:
import os
print(os.getcwd())
In my case, the working directory was (unexpectedly) the root of the project.
This is very tricky in Python.
I'll first comment on why you're having that problem and then I will mention two possible solutions.
What's going on?
You must take this paragraph from the Python documentation into consideration:
Note that relative imports are based on the name of the current module. Since the name of the main module is always "__main__", modules intended for use as the main module of a Python application must always use absolute imports.
And also the following from PEP 328:
Relative imports use a module's __name__ attribute to determine that module's position in the package hierarchy. If the module's name does not contain any package information (e.g. it is set to '__main__') then relative imports are resolved as if the module were a top level module, regardless of where the module is actually located on the file system.
Relative imports work from the module's name (the __name__ attribute), which can take two values:
It's the filename, preceded by the folder structure, separated by dots.
For eg: package.test_A.test
Here Python knows the parent directories: before test comes test_A and then package.
So you can use the dot notation for relative import.
# package/test_A/test.py
from ..A import foo
You can then have a root file in the root directory which calls test.py:
# root.py
from package.test_A import test
When you run the module (test.py) directly, it becomes the entry point to the program, so __name__ == '__main__'. The filename has no indication of the directory structure, so Python doesn't know how to go up in the directory tree. For Python, test.py becomes the top-level script; there is nothing above it. That's why you cannot use relative imports.
Possible Solutions
A) One way to solve this is to have a root file (in the root directory) which calls the modules/packages, like this:
root.py imports test.py (entry point, __name__ == '__main__').
test.py (relative) imports foo.py.
foo.py says the module has been imported.
The output is:
package.A.foo has been imported
Module's name is: package.test_A.test
B) If you want to execute the code as a module and not as a top-level script, you can try this from the command line:
python -m package.test_A.test
Any suggestions are welcome.
You should also check: Relative imports for the billionth time, especially BrenBarn's answer.
from package.A import foo
I think it's clearer than
import sys
sys.path.append("..")
As the most popular answer suggests, basically it's because your PYTHONPATH or sys.path includes . but not the path to your package. And the relative import anchor is determined by how the module is launched (with -m, effectively your current working directory), not by the file where the import happens; oddly.
You could fix this by first changing your relative import to absolute and then either starting it with:
PYTHONPATH=/path/to/package python -m test_A.test
OR forcing the python path when called this way, because:
With python -m test_A.test you're executing test_A/test.py with __name__ == '__main__' and __file__ == '/absolute/path/to/test_A/test.py'
That means that in test.py you could use your absolute import semi-protected in the main case condition and also do some one-time Python path manipulation:
from os import path
…

def main():
    …

if __name__ == '__main__':
    import sys
    sys.path.append(path.join(path.dirname(__file__), '..'))
    from A import foo

    exit(main())
This is actually a lot simpler than what other answers make it out to be.
TL;DR: Import A directly instead of attempting a relative import.
The current working directory is not a package, unless you import the folder package from a different folder. So the behavior of your package will work fine if you intend it to be imported by other applications. What's not working is the tests...
Without changing anything in your directory structure, all that needs to be changed is how test.py imports foo.py.
from A import foo
Now running python -m test_A.test from the package directory will run without an ImportError.
Why does that work?
Your current working directory is not a package, but it is added to the path. Therefore you can import folder A and its contents directly. It is the same reason you can import any other package that you have installed... they're all included in your path.
Edit: 2020-05-08: It seems the website I quoted is no longer controlled by the person who wrote the advice, so I'm removing the link to the site. Thanks for letting me know, baxx.
If someone's still struggling a bit after the great answers already provided, I found advice on a website that no longer is available.
Essential quote from the site I mentioned:
"The same can be specified programmatically in this way:
import sys
sys.path.append('..')
Of course the code above must be written before the other import
statement.
It's pretty obvious that it has to be this way, thinking on it after the fact. I was trying to use the sys.path.append('..') in my tests, but ran into the issue posted by OP. By adding the import and sys.path definition before my other imports, I was able to solve the problem.
Just remove the .. in test.py.
For me, pytest works fine with that.
Example:
from A import foo
If you have an __init__.py in an upper folder, you can initialize the import as
import file.path as alias
in that init file. Then you can use it in lower scripts as:
import alias
In my case, I had to change to this:
Solution 1 (better, since it depends on the current .py file's path; easy to deploy)
Use pathlib.Path.parents to make the code cleaner:
import sys
import os
import pathlib
target_path = pathlib.Path(os.path.abspath(__file__)).parents[3]
sys.path.append(str(target_path))  # sys.path entries should be strings, not Path objects
from utils import MultiFileAllowed
Solution 2
import sys
import os
sys.path.append(os.getcwd())
from utils import MultiFileAllowed
In my humble opinion, I understand this question in this way:
[CASE 1] When you start an absolute-import like
python -m test_A.test
or
import test_A.test
or
from test_A import test
you're actually setting the import-anchor to be test_A; in other words, the top-level package is test_A. So, when we have test.py do from ..A import xxx, you are escaping from the anchor, and Python does not allow this.
[CASE 2] When you do
python -m package.test_A.test
or
from package.test_A import test
your anchor becomes package, so package/test_A/test.py doing from ..A import xxx does not escape the anchor (still inside the package folder), and Python happily accepts this.
In short:
Absolute-import changes current anchor (=redefines what is the top-level package);
Relative-import does not change the anchor but confines to it.
Furthermore, we can use the fully-qualified module name (FQMN) to inspect this problem.
Check FQMN in each case:
[CASE2] test.__name__ = package.test_A.test
[CASE1] test.__name__ = test_A.test
So, for CASE2, a from .. import xxx will result in a new module with FQMN=package.xxx, which is acceptable.
While for CASE1, the .. within from .. import xxx would jump out of the starting node (anchor) test_A, and this is NOT allowed by Python.
[2022-07-19] I think this "relative-import" limitation is quite an ugly design, totally against (one of) Python's motto "Simple is better than complex".
Not sure about python 2.x, but in python 3.6, assuming you are trying to run the whole suite with unittest discover, you just have to use -t
-t, --top-level-directory directory
Top level directory of project (defaults to start directory)
So, on a structure like
project_root
|
|----- my_module
| \
| \_____ my_class.py
|
\ tests
\___ test_my_func.py
One could for example use:
python3 -m unittest discover -s /full_path/project_root/tests -t /full_path/project_root/
And still import the my_module.my_class without major dramas.
Having
package/
__init__.py
A/
__init__.py
foo.py
test_A/
__init__.py
test.py
in A/__init__.py import foo:
from .foo import foo
when importing A/ from test_A/
import sys, os
sys.path.append(os.path.abspath('../A'))
# then import foo
import foo
The following structure (in Python 3.7) is not allowing me to import class A in module B:
package:
package:
__init__.py
a:
__init__.py
a.py
b:
__init__.py
b.py
The top-level __init__.py is blank. Here are the remaining files:
a
# package/package/a/__init__.py
from .a import A
# package/package/a/a.py
class A:
def __init__(self):
pass
b:
# package/package/b/__init__.py
from .b import B
# package/package/b/b.py
from package.a.a import A
class B:
def __init__(self):
pass
Without doing anything else, on Windows, if I try to run b.py (from within the b folder), I get the following error:
ModuleNotFoundError: No module named 'package.a'
If I add a main.py at the top level:
package:
package:
__init__.py
main.py
a:
__init__.py
a.py
b:
__init__.py
b.py
containing
# package/package/main.py
import a
import b
and run main.py (from within package/package), I get the same error.
If I change b.py to
# package/package/b/b.py
from ..a.a import A
class B:
def __init__(self):
pass
and run b.py (from within the b folder) or main.py (from within package/package), I get the error that
ValueError: attempted relative import beyond top-level package
The python docs make it seem like I should be able to do this though!
Can someone please explain why I am getting these errors? I've seen a couple similar posts to this, but they haven't fixed my problem:
Importing Submodules Python 3
Python submodule importing correctly in python 3.7 but not 3.6
Whatever module is being run by Python is called top-level.
In your shell, when you run > py main.py ($ python3 main.py on Linux), the file main.py is top-level and is called the top-level module.
In the interpreter, the interpreter itself is always top-level, and is called the top-level environment (for proof, type >>> __name__ into the interpreter and it will return '__main__')
Unfortunately (IMO), the term "top-level" is not well-defined in the python docs as it is used in several different contexts. Regardless, it is important to understand that Python always renames __name__ of the top-level entity to '__main__'.
PEP 328 (explained in this SO post) states
relative imports use the module's __name__ attribute to determine its position in the package hierarchy.
Since Python renames the __name__ of the top-level module to '__main__', the top-level module has no package information because it has no dots in its name.
Hence, top-level modules are not considered part of any package (even though they very well may be!). In other words, for packages imported from the current directory, '__main__' determines what is top-level. Packages at the same level as '__main__' (a and b in my example) are top-level packages.
Confusingly, the python docs and PEP 328 give a misleading example. The "correct usages" shown for relative imports are only valid in a specific context.
Recall that import searches paths listed in sys.path to find packages and modules to import. The current directory is in sys.path, along with the paths to builtin packages (like math and os) and installed packages (i.e. pip installed package). Python does not rename the __name__ of non-top-level packages.
Therefore, the python docs and PEP 328 examples are valid only for packages and modules NOT in the top-level directory.
What I should have written was from a.a import A:
# package/package/b/b.py
from a.a import A
class B:
def __init__(self):
pass
Since package is above the top-level module, trying to do an absolute import (like from package.a.a import A) results in an ImportError even though main.py is inside of the package package.
That being said, if you go to PyPI and GitHub and look at released packages, you will find they have absolute imports like import package.a.a! In fact, if I were to publish my package and leave the import as from a.a import A, end users would get an ImportError because they installed package package and don't have a package a! Furthermore, in the current configuration, I'm unable to test with unittest or pytest that users can import and use my package as expected because I cannot do from package.a.a import A myself!
So the question becomes how do you write and test your own custom packages?
The trick is that these packages were written in development mode. To install your package in development mode, run > pip install -e . from the top-level directory (assuming you have a setup.py written; see the NOTE below).
When this is done, python treats your package like a typical library package (i.e. a pip installed package), so python does not change its __name__ to __main__. Thus, you can
import it with absolute imports
test and use your package like an end user would, and
any edits you make to it take effect immediately when run, without requiring you to re-pip install it, just like packages in the top-level directory do
This key difference between developing packages vs. standalone programs is a huge source of confusion and frustration for most first-time developers (myself included) and is very important to keep in mind. I hope this answer provides clarification for others and may be added to documentation in the future. Please let me know in the comments below if this helped you.
NOTE: pip install -e ., where -e stands for "editable", puts a link (a *.pth file) in your python installation folder so that the package is treated as an installed package, but also so that any changes you write to it take effect immediately (see the Python Packaging Tutorial). Hence, you can use this to develop your own packages or to install and edit third-party packages to your needs. This requires you to create a setup.py, but all your test code, client code, etc., will be able to import your package the usual way (i.e. you can treat your package like any other pip installed package). You can achieve the same effect with poetry and flit by configuring your pyproject.toml file.
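For reference, a minimal setup.py for the layout in this question might look like the following; this is a sketch, and the name and version values are assumptions:
# package/setup.py (sits in the top-level directory, next to the inner package/ folder)
import setuptools

setuptools.setup(
    name="package",
    version="0.0.1",
    packages=setuptools.find_packages(),  # finds package, package.a, package.b
)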
Here are some additional useful references:
realpython.com: Python Modules and Packages - An Introduction
realpython.com: Python import: Advanced Techniques and Tips
realpython.com: Absolute vs Relative Imports in Python
I've also stumbled upon a similar issue. I've decided to create a new Python import library to solve this and other issues.
The result is ultraimport. It allows file based imports in Python and it does not care about any top-level module. If you know the path, you can import the file. I've used your structure as one of the examples which you can also find in the repository in the examples folder.
After changing your b.py to:
import ultraimport
A = ultraimport('__dir__/../a/a.py', 'A')
print(A)
class B:
def __init__(self):
pass
you can execute it as expected from the b folder:
package/package/b$ python ./b.py
<class 'a.A'>
Also running it through main.py works now:
package/package$ python ./main.py
<class 'a.A'>
I have two directories in my project:
project/
src/
scripts/
"src" contains my polished code, and "scripts" contains one-off Python scripts.
I would like all the scripts to have "../src" added to their sys.path, so that they can access the modules under the "src" tree. One way to do this is to write a scripts/__init__.py file, with the contents:
scripts/__init__.py:
import sys
sys.path.append("../src")
This works, but has the unwanted side-effect of putting all of my scripts in a package called "scripts". Is there some other way to get all my scripts to automatically call the above initialization code?
I could just edit the PYTHONPATH environment variable in my .bashrc, but I want my scripts to work out-of-the-box, without requiring the user to fiddle with PYTHONPATH. Also, I don't like having to make account-wide changes just to accommodate this one project.
Even if you have other plans for distribution, it might be worth putting together a basic setup.py in your src folder. That way, you can run setup.py develop to have distutils put a link to your code onto your default path (meaning any changes you make will be reflected in-place without having to "reinstall", and all modules will "just work," no matter where your scripts are). It'd be a one-time step, but that's still one more step than zero, so it depends on whether that's more trouble than updating .bashrc. If you use pip, the equivalent would be pip install -e /path/to/src.
The more-robust solution--especially if you're going to be mirroring/versioning these scripts on several developers' machines--is to do your development work inside a controlled virtual environment. It turns out virtualenv even has built-in support for making your own bootstrap customizations. It seems like you'd just need an after_install() hook to either tweak sitecustomize, run pip install -e, or add a plain .pth file to site-packages. The custom bootstrap could live in your source control along with the other scripts, and would need to be run once for each developer's setup. You'd also have the normal benefits of using virtualenv (explicit dependency versioning, isolation from system-wide configuration, and standardization between disparate machines, to name a few).
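As a concrete illustration of the plain .pth option mentioned above (the filename is arbitrary, but it must end in .pth and live in site-packages; each non-comment line it contains is added to sys.path at interpreter startup):
# file: <env>/lib/pythonX.Y/site-packages/project-src.pth
/absolute/path/to/project/src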
If you really don't want to have any setup steps whatsoever and are willing to only run these scripts from inside the 'project' directory, then you could plop in an __init__.py as such:
project/
src/
some_module.py
scripts/
__init__.py # special "magic"
some_script.py
And these are what your files could look like:
# file: project/src/some_module.py
print("importing %r" % __name__)
def some_function():
print("called some_function() inside %s" % __name__)
--------------------------------------------------------
# file: project/scripts/some_script.py
import some_module
if __name__ == '__main__':
some_module.some_function()
--------------------------------------------------------
# file: project/scripts/__init__.py
import sys
from os.path import dirname, abspath, join
print("doing magic!")
sys.path.insert(0, join(dirname(dirname(abspath(__file__))), 'src'))
Then you'd have to run your scripts like so:
[~/project] $ python -m scripts.some_script
doing magic!
importing 'some_module'
called some_function() inside some_module
Beware! The scripts can only be called like this from inside project/:
[~/otherdir] $ python -m scripts.some_script
ImportError: no module named scripts
To enable that, you're back to editing .bashrc, or using one of the options above. The last option should really be a last resort; as Simon said, you're really fighting the language at that point.
If you want your scripts to be runnable (I assume from the command line), they have to be on the path somewhere.
Something sounds odd about what you're trying to do though. Can you show us an example of exactly what you're trying to accomplish?
You can add a file called 'pathHack.py' in the project dir and put something like this into it:
import os
import sys

pkgDir = os.path.dirname(__file__)
sys.path.insert(0, os.path.join(pkgDir, 'scripts'))
Then, in a python file in your project dir, start by:
import pathHack
And now you can import stuff from the scripts dir without the 'scripts.' prefix. If you have only one file in this directory, and you don't care about hiding this kind of thing, you may inline this snippet.