I want to ask you something that came to my mind doing some stuff.
I have the following structure:
src
- __init__.py
- class1.py
+ folder2
  - __init__.py
  - class2.py
In class2.py I want to import class1 so I can use it. Obviously, I cannot use
from src.class1 import Class1
because it will produce an error. A workaround that works for me is to define the following in the __init__.py inside folder2:
import sys
sys.path.append('src')
My question is whether this option is valid and a good idea to use, or whether there are better solutions.
Another question. Imagine that the project structure is:
src
- __init__.py
- class1.py
+ folder2
  - __init__.py
  - class2.py
+ errorsFolder
  - __init__.py
  - errors.py
In class1:
from errorsFolder.errors import Errors
this works fine. But if I try to do the same in class2, which is at the same level as errorsFolder:
from src.errorsFolder.errors import Errors
It fails (ImportError: No module named src.errorsFolder.errors)
Thank you in advance!
Leaving aside the fact that it's slightly surprising to have to import a "parent" module in your package, your workaround depends on the current directory from which you're running your application, which is bad.
import sys
sys.path.append('src')
should be
import sys,os
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)),os.pardir))
to add the parent directory of your current module's directory, regardless of the directory you run your application from (your module may be imported by several applications, which aren't all necessarily run from the same directory).
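For reference, a minimal sketch of how the two files could look with that cwd-independent variant (assuming Class1 is the class defined in class1.py, as in your example):

# folder2/__init__.py -- path hack that does not depend on the current working directory
import os
import sys
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), os.pardir))

# folder2/class2.py -- class1 is now importable as a top-level module
from class1 import Class1

Note that this makes class1 reachable under two names (class1 and src.class1), which is exactly the duplicate-module problem warned about below.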
One correct way to solve this is to set the environment variable PYTHONPATH to the path which contains src. Then import src.class1 will always work, regardless of which directory you start in.
from ..class1 import Class1 should work (at least it does here in a similar layout, using python 2.7.x).
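For instance, a sketch of class2.py using package-relative imports (assuming Class1 and Errors are the names defined in your modules; the class body is only illustrative):

# src/folder2/class2.py
from ..class1 import Class1                  # one level up: src/class1.py
from ..errorsFolder.errors import Errors     # sibling subpackage of folder2

class Class2:
    def __init__(self):
        self.one = Class1()    # just to show Class1 is usable here
        self.errors = Errors   # and so is Errors

These relative imports resolve as long as src is imported as a package (for example via python -m src.folder2.class2 from the directory that contains src, or because another module does import src.folder2.class2); they will not work if class2.py is run directly as a script.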
As a general rule: messing with sys.path is usually a very bad idea, especially if it allows the same module to be imported from two different paths (which would be the case with your file layout).
Also, you may want to think twice about your current layout. Python is not Java and doesn't require (nor even encourage) a "one class per module" approach. If both classes need to work together, they might be better in the same module, or at least in modules at the same level of the package tree (note that you can use the top-level package's __init__ as a facade to provide direct access to objects defined in submodules / subpackages). NB: I'm not saying that your current layout is necessarily wrong, just that it might not be the simplest one.
There is also a solution that does not involve the use of the environment variable PYTHONPATH.
In src/ create setup.py with these contents:
from setuptools import setup
setup()
Now you can do pip install -e /path/to/src and import to your heart's content.
-e, --editable <path/url> Install a project in editable mode (i.e. setuptools "develop mode") from a local project path or a VCS url.
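If you want the editable install to have a proper distribution name and to declare its contents explicitly, a slightly fuller sketch could be (the project name is just a placeholder, not something from the question):

# src/setup.py -- minimal sketch; "myproject" is a placeholder name
from setuptools import setup, find_packages

setup(
    name="myproject",
    version="0.1",
    py_modules=["class1"],      # the top-level module next to setup.py
    packages=find_packages(),   # picks up folder2 and errorsFolder
)

After pip install -e /path/to/src, class1, folder2 and errorsFolder are importable from anywhere, with no PYTHONPATH or sys.path editing.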
No, it's not a good idea. Python finds modules in two ways:
Python looks for its modules and packages in $PYTHONPATH.
Refer: https://docs.python.org/2/using/cmdline.html#envvar-PYTHONPATH
To find out what is included in $PYTHONPATH (and the other default search locations), run the following code in Python 3:
import sys
print(sys.path)
All folders containing an __init__.py are marked as Python packages (if they are subdirectories of a directory on PYTHONPATH).
These two mechanisms together cover what you need to structure a Python project.
Related
I am an experienced Java enterprise developer but very new to a Python enterprise development shop. I am currently struggling to understand why some imports work while others don't.
Some background: our dev team recently upgraded Python from 3.6 to 3.10.5, and the following is our package structure:
src/
  bunch of files (Dockerfile, Pipfile, requirements.txt, shell scripts, etc.)
  package/
    __init__.py
    moduleA.py
    subpackage1/
      __init__.py
      moduleX.py
      moduleY.py
    subpackage2/
      __init__.py
      moduleZ.py
    tests/
      __init__.py
      test1.py
Now, inside the moduleA.py, I am trying to import subpackage2/moduleZ.py like so
from .subpackage2 import moduleZ
But, I get the error saying
ImportError: attempted relative import with no known parent package
The funny thing is that if I move moduleA.py out of package/ and into src/, then it is able to find everything. I am not sure why this is the case.
I run moduleA.py by executing python package/moduleA.py.
Now, I read that maybe there is a problem because you have to give the -m parameter when running a module as a script (or something along those lines). But if I do that, I get the following error:
ModuleNotFoundError: No module named 'package/moduleA.py'
I even tried running package/moduleA with the .py removed, but that does not work either. I can understand why, as I technically never installed it?
All of this happened because apparently, the tests broke and to make it work they added relative imports. They changed the import from "from subpackage2 import moduleZ" to "from .subpackage2 import moduleZ" and the tests started working, but the app started failing.
Any understanding I can get would be much appreciated.
The -m parameter is used with the import name, not the path. So you'd use python3 -m package.moduleA (with . instead of /, and no .py), not python3 -m package/moduleA.py.
That said, it only works if package.moduleA is locatable from one of the roots in sys.path. Shy of installing the package, the simplest way to make it work is to ensure your working directory is src (so package exists in the working directory):
$ cd path/to/src
$ python3 -m package.moduleA
and, with your existing setup, if moduleA.py includes a from .subpackage2 import moduleZ, the import should work; Python knows package.moduleA is a module within package, so it can use a relative import to look for a sibling package to moduleA named subpackage2, and then find moduleZ inside it.
Obviously, this is brittle (it only works if you cd to the src root directory before running Python, or hack the path to src into PYTHONPATH, which is a terrible hack if the code ever has to be run by anyone else). Ideally, you make this an installable package, install it (in the global site-packages, the user site-packages, or within a virtual environment created with the built-in venv module or the third-party virtualenv module), and then your working directory no longer matters (since site-packages is part of your sys.path automatically). For simple testing, as long as the working directory is correct (not sure what it was for you) and you use -m correctly (you were using it incorrectly), relative imports will work, but it's not the long-term solution.
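As a concrete sketch of that working combination (names taken from your layout; the body of main is only illustrative), moduleA.py can keep its relative import and still be runnable with -m:

# src/package/moduleA.py -- run as: python3 -m package.moduleA (with src as the working directory)
from .subpackage2 import moduleZ

def main():
    # use whatever moduleZ actually provides; printing its name is just a placeholder
    print("loaded", moduleZ.__name__)

if __name__ == "__main__":
    main()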
So, first of all: the root import directory is the directory from which you're running the main script.
This directory by default is the root for all imports from all scripts.
So if you're executing script from directory src you can do such imports:
from package.moduleA import *
from package.subpackage1.moduleX import *
But now, in moduleA and moduleX, you need to write imports based on the root folder. If you want to import something from moduleY inside moduleX, you need to do:
# this is inside moduleX
from package.subpackage1.moduleY import *
This is because Python looks for modules in specific locations.
The first location is your root directory, i.e. the directory from which you execute your main script.
The second location is the directory with modules installed by pip.
You can check all directories using following:
import sys
for p in sys.path:
    print(p)
Now, to solve your problem, there are a couple of solutions.
The quick one, but IMHO not the best one, is to add all paths with submodules to sys.path (the list of all directories where Python looks for modules).
import sys

new_path = "/path/to/application/app/folder/src/package/subpackage1"
if new_path not in sys.path:
    sys.path.append(new_path)
Another solution is to use full path for imports in all package modules:
from package.subpackage1.moduleX import *
I think in your case it will be the correct solution.
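That is, a sketch of moduleA.py with a root-based absolute import (this assumes src is on the import path, e.g. because you run python -m package.moduleA from src, or because the package is installed):

# src/package/moduleA.py -- absolute import from the package root
from package.subpackage2 import moduleZ

The tests can then use the same spelling (from package.subpackage2 import moduleZ) instead of the bare from subpackage2 import moduleZ, so the app and the tests agree on how the package is addressed.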
You can also combine the two solutions.
First add the folders with subpackages to sys.path and use the subpackage folders as root folders for imports. But this is a good solution only if you have a complex submodule structure, and it's not the best one if in the future you need to deploy your package as a wheel or share it between multiple projects.
It seems there are already quite a few questions here about relative imports in Python 3, but after going through many of them I still didn't find the answer to my issue.
So here is the question.
I have a package as shown below:
package/
  __init__.py
  A/
    __init__.py
    foo.py
  test_A/
    __init__.py
    test.py
and I have a single line in test.py:
from ..A import foo
Now, I am in the package folder, and I run:
python -m test_A.test
I get the message:
"ValueError: attempted relative import beyond top-level package"
but if I am in the parent folder of package and run, e.g.:
cd ..
python -m package.test_A.test
everything is fine.
Now my question is:
when I am in the package folder and run the module inside the test_A sub-package as test_A.test, based on my understanding ..A goes up only one level, which is still within the package folder. Why does it give a message saying "beyond top-level package"? What exactly causes this error message?
EDIT: There are better/more coherent answers to this question in other questions:
Sibling package imports
Relative imports for the billionth time
Why doesn't it work? It's because python doesn't record where a package was loaded from. So when you do python -m test_A.test, it basically just discards the knowledge that test_A.test is actually stored in package (i.e. package is not considered a package). Attempting from ..A import foo is trying to access information it doesn't have any more (i.e. sibling directories of a loaded location). It's conceptually similar to allowing from ..os import path in a file in math. This would be bad because you want the packages to be distinct. If they need to use something from another package, then they should refer to them globally with from os import path and let python work out where that is with $PATH and $PYTHONPATH.
When you use python -m package.test_A.test, then using from ..A import foo resolves just fine because it kept track of what's in package and you're just accessing a child directory of a loaded location.
Why doesn't python consider the current working directory to be a package? NO CLUE, but gosh it would be useful.
import sys
sys.path.append("..") # Adds higher directory to python modules path.
Try this.
Worked for me.
Assumption:
If you are in the package directory, A and test_A are separate packages.
Conclusion:
..A imports are only allowed within a package.
Further notes:
Making relative imports only available within packages is useful if you want to ensure that packages can be placed on any path located on sys.path.
EDIT:
Am I the only one who thinks that this is insane!? Why in the world is the current working directory not considered to be a package? – Multihunter
The current working directory is usually located in sys.path. So, all files there are importable. This has been the behavior since the early days of Python, before packages existed. Making the running directory a package would allow imports of modules as "import .A" and as "import A", which would then be two different modules. Maybe this is an inconsistency to consider.
None of these solutions worked for me in 3.6, with a folder structure like:
package1/
  subpackage1/
    module1.py
package2/
  subpackage2/
    module2.py
My goal was to import from module1 into module2. What finally worked for me was, oddly enough:
import sys
sys.path.append(".")
Note the single dot as opposed to the two-dot solutions mentioned so far.
Edit: The following helped clarify this for me:
import os
print(os.getcwd())
In my case, the working directory was (unexpectedly) the root of the project.
This is very tricky in Python.
I'll first comment on why you're having that problem and then I will mention two possible solutions.
What's going on?
You must take this paragraph from the Python documentation into consideration:
Note that relative imports are based on the name of the current module. Since the name of the main module is always "__main__", modules intended for use as the main module of a Python application must always use absolute imports.
And also the following from PEP 328:
Relative imports use a module's name attribute to determine that
module's position in the package hierarchy. If the module's name does
not contain any package information (e.g. it is set to 'main')
then relative imports are resolved as if the module were a top level
module, regardless of where the module is actually located on the file
system.
Relative imports work from the module name (the __name__ attribute), which can take two values:
It's the module name, preceded by the folder structure, separated by dots.
For example: package.test_A.test
Here Python knows the parent directories: before test comes test_A, and then package.
So you can use the dot notation for relative imports.
# package/test_A/test.py
from ..A import foo
You can then have a root file in the root directory which calls test.py:
# root.py
from package.test_A import test
When you run the module (test.py) directly, it becomes the entry point of the program, so __name__ == '__main__'. The name then carries no indication of the directory structure, so Python doesn't know how to go up in the hierarchy. For Python, test.py becomes the top-level script; there is nothing above it. That's why you cannot use relative imports.
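A quick way to see this for yourself (a temporary diagnostic, not part of the solution) is to print the relevant attributes at the top of test.py:

# package/test_A/test.py -- temporary diagnostic, remove afterwards
print("__name__    =", __name__)
print("__package__ =", __package__)
# python -m package.test_A.test (from the parent of package/):
#   __name__ == '__main__', __package__ == 'package.test_A'  -> from ..A import foo works
# python -m test_A.test (from inside package/):
#   __name__ == '__main__', __package__ == 'test_A'          -> ..A reaches beyond the top-level package

Relative imports in the main module are resolved from __package__, which is why only the first invocation can reach A.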
Possible Solutions
A) One way to solve this is to have a root file (in the root directory) which calls the modules/packages, like this:
root.py imports test.py (entry point, __name__ == '__main__').
test.py (relative) imports foo.py.
foo.py says the module has been imported.
The output is:
package.A.foo has been imported
Module's name is: package.test_A.test
B) If you want to execute the code as a module and not as a top-level script, you can try this from the command line:
python -m package.test_A.test
Any suggestions are welcome.
You should also check: Relative imports for the billionth time, especially BrenBarn's answer.
from package.A import foo
I think it's clearer than
import sys
sys.path.append("..")
As the most popular answer suggests, basically it's because your PYTHONPATH or sys.path includes . but not the path to your package. And the relative import is relative to your current working directory, not to the file where the import happens; oddly.
You could fix this by first changing your relative import to absolute and then either starting it with:
PYTHONPATH=/path/to/package python -m test_A.test
OR by forcing the Python path when it is called this way, because:
With python -m test_A.test you're executing test_A/test.py with __name__ == '__main__' and __file__ == '/absolute/path/to/test_A/test.py'
That means that in test.py you could keep your absolute import semi-protected inside the __main__ condition and also do some one-time Python path manipulation:
from os import path
…

def main():
    …

if __name__ == '__main__':
    import sys
    sys.path.append(path.join(path.dirname(__file__), '..'))
    from A import foo
    exit(main())
This is actually a lot simpler than what other answers make it out to be.
TL;DR: Import A directly instead of attempting a relative import.
The current working directory is not a package, unless you import the folder package from a different folder. So the behavior of your package will work fine if you intend it to be imported by other applications. What's not working is the tests...
Without changing anything in your directory structure, all that needs to be changed is how test.py imports foo.py.
from A import foo
Now running python -m test_A.test from the package directory will run without an ImportError.
Why does that work?
Your current working directory is not a package, but it is added to the path. Therefore you can import folder A and its contents directly. It is the same reason you can import any other package that you have installed... they're all included in your path.
Edit 2020-05-08: It seems the website I quoted is no longer controlled by the person who wrote the advice, so I'm removing the link to the site. Thanks for letting me know, baxx.
If someone's still struggling a bit after the great answers already provided, I found advice on a website that is no longer available.
Essential quote from the site I mentioned:
"The same can be specified programmatically in this way:
import sys
sys.path.append('..')
Of course the code above must be written before the other import statement."
It's pretty obvious that it has to be this way, thinking about it after the fact. I was trying to use sys.path.append('..') in my tests, but ran into the issue posted by the OP. By adding the import and the sys.path definition before my other imports, I was able to solve the problem.
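In other words, the order inside the test module matters; a minimal sketch (assuming, as in the question, that foo lives in a sibling directory A):

import sys
sys.path.append('..')    # must run before the import below is attempted
from A import foo        # deliberately placed after the path tweak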
Just remove the .. in test.py.
For me, pytest works fine with that.
Example:
from A import foo
If you have an __init__.py in an upper folder, you can set up the import there as import path.to.module as alias in that __init__ file. Then you can use it in lower scripts as:
import alias
In my case, I had to change to this:
Solution 1 (preferable, since it depends on the current .py file's path and is easy to deploy).
Using pathlib.Path.parents makes the code cleaner:
import sys
import os
import pathlib
target_path = pathlib.Path(os.path.abspath(__file__)).parents[3]
sys.path.append(str(target_path))  # sys.path entries should be strings
from utils import MultiFileAllowed
Solution 2
import sys
import os
sys.path.append(os.getcwd())
from utils import MultiFileAllowed
In my humble opinion, I understand this question in this way:
[CASE 1] When you start an absolute-import like
python -m test_A.test
or
import test_A.test
or
from test_A import test
you're actually setting the import anchor to be test_A; in other words, the top-level package is test_A. So, when test.py does from ..A import xxx, you are escaping from the anchor, and Python does not allow this.
[CASE 2] When you do
python -m package.test_A.test
or
from package.test_A import test
your anchor becomes package, so package/test_A/test.py doing from ..A import xxx does not escape the anchor (it stays inside the package folder), and Python happily accepts this.
In short:
Absolute-import changes current anchor (=redefines what is the top-level package);
Relative-import does not change the anchor but confines to it.
Furthermore, we can use the fully-qualified module name (FQMN) to inspect this problem.
Check FQMN in each case:
[CASE2] test.__name__ = package.test_A.test
[CASE1] test.__name__ = test_A.test
So, for CASE 2, a from .. import xxx will resolve to a module with FQMN package.xxx, which is acceptable.
While for CASE 1, the .. in from .. import xxx would jump out of the starting node (anchor) test_A, and this is NOT allowed by Python.
[2022-07-19] I think this "relative import" limitation is quite an ugly design, totally against (one of) Python's mottos: "Simple is better than complex".
Not sure about Python 2.x, but in Python 3.6, assuming you are trying to run the whole test suite with unittest, you just have to use -t:
-t, --top-level-directory directory
Top level directory of project (defaults to start directory)
So, on a structure like
project_root
|
|----- my_module
| \
| \_____ my_class.py
|
\ tests
\___ test_my_func.py
One could for example use:
python3 -m unittest discover -s /full_path/project_root/tests -t /full_path/project_root/
And still import my_module.my_class without major drama.
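For example, a test file under tests/ can then import the module directly; a sketch (the test body and class name are made up, only the layout comes from the answer):

# tests/test_my_func.py -- assumes project_root/my_module/my_class.py exists, as in the tree above
import unittest
from my_module import my_class   # resolvable because -t puts project_root on sys.path

class TestMyFunc(unittest.TestCase):
    def test_module_is_importable(self):
        self.assertIsNotNone(my_class)

if __name__ == '__main__':
    unittest.main()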
Having
package/
  __init__.py
  A/
    __init__.py
    foo.py
  test_A/
    __init__.py
    test.py
in A/__init__.py import foo:
from .foo import foo
when importing A/ from test_A/
import sys, os
sys.path.append(os.path.abspath('../A'))
# then import foo
import foo
I have a project with the following file structure:
root/
  run.py
  bot/
    __init__.py
    my_discord_bot.py
  dice/
    __init__.py
    dice.py
    # dice files
  help/
    __init__.py
    help.py
    # help files
  parser/
    __init__.py
    parser.py
    # other parser files
The program is run from within the root directory by calling python run.py. run.py imports bot.my_discord_bot and then makes use of a class defined there.
The file bot/my_discord_bot.py has the following import statements:
import dice.dice as d
import help.help as h
import parser.parser as p
On Linux, all three import statements execute correctly. On Windows, the first two seem to execute fine, but then on the third I'm told:
ImportError: No module named 'parser.parser'; 'parser' is not a package
Why does it break on the third import statement, and why does it only break on Windows?
Edit: clarifies how the program is run
Make sure that your parser is not shadowing a built-in or third-party package/module/library.
I am not 100% sure about the specifics of how this name conflict would be resolved, but it seems like you can potentially (a) have your module overridden by the existing module (which seems like it might be happening in your Windows case), or (b) override the existing module, which could cause bugs down the road. It seems like (b) is what commonly trips people up.
If you think this might be happening with one of your modules (which seems fairly likely with a name like parser), try renaming your module.
See this very nice article for more details and more common Python "import traps".
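A quick, hedged way to check whether this is what's happening is to ask Python which parser it would actually import, before touching any code:

# run from the same directory you use for "python run.py"
import importlib.util
print(importlib.util.find_spec("parser"))
# if the reported origin is not inside your project, another 'parser' (stdlib or third-party) wins

importlib.util.find_spec exists in Python 3.4+; the idea is simply to compare the reported location with your own parser/ directory.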
Put run.py outside the root folder, so that run.py sits next to the root folder, then create an __init__.py inside the root folder and change the imports to:
import root.parser.parser as p
Or just rename your parser module.
Anyway, you should be careful with naming, because you can easily mess up your own stuff someday.
I am writing a Python (Python3) application (not a package) and have some doubts about the correct directory structure. At the moment I have this:
myapp/
  __init__.py
  launch.py
  core/
    __init__.py
    some_core_class.py
    other_core_class.py
  gui/
    __init__.py
    some_gui_class.py
    other_gui_class.py
I want the application to be started with python launch.py from any place in my directory structure - of course prepending the correct path to launch.py, e.g. python myapps/myapp/launch.py.
Inside my modules I use absolute imports, e.g. in some_core_class.py I write from myapp.core.other_core_class import OtherCoreClass. I use the same way in launch.py, e.g. from myapp.core.some_core_class import SomeCoreClass.
But then launching it, for example, directly from the myapp directory by writing python launch.py results in ImportError: No module named 'myapp'. I found I could make it work by changing my import in launch.py to from core.some_core_class import SomeCoreClass, but this does not seem like a correct absolute import to me and is inconsistent with the imports in the other files.
What is the best way to solve my issue? I would like to avoid adding myapp to the PATH environment variable, which would require a manual edit by the user or an installer. How should I change my code or my imports to make the application launchable from anywhere? Is that even possible?
I have just found that I can solve it by prepending
import os
import sys
app_parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(app_parent_dir)
at the very beginning of my launch.py. But this is so ugly and feels like a hack. There must be a better way; Python should not be ugly. Awaiting better answers...
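One less hacky alternative, in line with the -m advice elsewhere on this page, is to launch the application as a module from the directory that contains myapp (a sketch; it assumes you can run it that way or wrap the command in a small launcher script):

$ cd path/to/the/directory/containing/myapp
$ python -m myapp.launch

With -m the current directory is put on sys.path, so absolute imports like from myapp.core.some_core_class import SomeCoreClass work unchanged in every module, including launch.py, and the sys.path block at the top of launch.py is no longer needed.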