/root
|- __init__.py
|
|- /src
| |
| |- __init__.py
| |- /x
| |
| |- __init__.py
| |- xy.py
|
|- /tests
| |- __init__.py
| |- /test_x
| |
| |- __init__.py
| |- test_xy.py
# tests/test_x/test_xy.py
from src.x.xy import XY

class TestXY:
    # etc
When I'm in root and run pytest tests/*/*, I get an error on from src.x.xy import XY because src can't be found. If I change the import to the relative from ...src.x.xy import XY, I get "cannot import" because it reaches for a directory above the package.
I also tried running python -m pytest tests/*/*, but I get an error about conftest not being found in __pycache__, which I don't understand: (ERROR: not found: /root/tests/__pycache__/conftest.cpython-310-pytest-7.1.2.pyc (no name '/root/tests/__pycache__/conftest.cpython-310-pytest-7.1.2.pyc' in any of []))
What am I missing? Why is it so hard to run tests this way? I can run them individually in pycharm by clicking the little green arrows in the test script no problem.
With that project layout you should use a config file, and the config file should add the path to src.
pyproject.toml example:
[tool.pytest.ini_options]
pythonpath = [
"src"
]
pytest.ini example:
[pytest]
pythonpath = src
If you have multiple src directories, you can add them to the config as well.
For example pyproject.toml:
[tool.pytest.ini_options]
pythonpath = [
"src", "src2",
]
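If you can't or don't want to use the config option (native pythonpath support only landed in pytest 7; older versions need the pytest-pythonpath plugin), a conftest.py at the repository root achieves the same effect by hand. A minimal sketch, assuming the /root layout from the question:

```python
# conftest.py at the repository root: put the root directory on sys.path
# before test collection, so "from src.x.xy import XY" resolves no matter
# which directory pytest was started from.
import os
import sys

ROOT = os.path.dirname(os.path.abspath(__file__))
if ROOT not in sys.path:
    sys.path.insert(0, ROOT)
```

Because pytest imports conftest.py before collecting tests, the path tweak is already in place by the time test_xy.py is imported.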
Related
I have the following project structure:
Project
|- sim
|  |- out
|  |  |- main.py
|
|- libs
|  |- __init__.py
|  |- plus.py
Inside plus.py there is a function called sum(a,b) that returns the sum of 2 numbers. I'm trying to import the module plus.py into main.py to be able to call this function, however I'm getting the following error: ImportError: attempted relative import with no known parent package.
Here is the code inside main.py:
from ...libs import plus
a = 1
b = 5
c = plus.sum(a,b)
print(c)
One of the solutions I found is to add the project directory to path, but I'm trying to avoid that.
I'm using VSCode to call python, this could be also a useful information.
What I'm doing wrong here?
Thanks in advance.
EDIT:
Added __init__.py files in the sim, out and Project directories as @ThePjot suggested, and the error remains. The project structure is now:
Project
|- __init__.py
|
|- sim
|  |- __init__.py
|  |- out
|  |  |- __init__.py
|  |  |- main.py
|
|- libs
|  |- __init__.py
|  |- plus.py
The __init__.py files are empty.
I've had similar issues, so I created an experimental new import library, ultraimport, that allows file-system-based imports and can solve your issue.
In your main.py you would then write:
import ultraimport
plus = ultraimport('__dir__/../../libs/plus.py', 'plus')
a = 1
b = 5
c = plus.sum(a,b)
print(c)
PS: With ultraimport, it's also not necessary to create those __init__.py files. You could remove them again and it will still work.
The project I am working on has the following structure:
|- setup.py
|- package/
| |- __init__.py
| |- file_a.py
| |- file_b.py
| |- submodule/
| | |- __init__.py
| | |- submodule_part_a/
| | | |- file_c.py
| | |- submodule_part_b/
| | | |- file_d.py
When I run from package.submodule.submodule_part_a import file_c I receive the following Error: ImportError: No module named 'package.submodule'
After some research it seems that this is caused by either the packages or package_dir. The setuptools setup method in the setup.py file looks similar to this:
setup(..., packages=['package',], package_dir={'package':'package'}, ...)
I've confirmed that once the above package is installed there is no submodule folder inside the installed directory.
What needs to be done in order to correctly install submodule?
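One likely culprit, given the tree above: submodule_part_a and submodule_part_b have no __init__.py, and the hand-written packages list only names 'package'. The usual fix is to add the missing __init__.py files and let setuptools.find_packages() enumerate every subpackage. A sketch that rebuilds the layout in a temporary directory to show what find_packages() actually discovers:

```python
import os
import tempfile

from setuptools import find_packages

# Recreate the layout in a throwaway directory, this time with an
# __init__.py in every package directory (find_packages() skips any
# directory that lacks one).
root = tempfile.mkdtemp()
for d in ("package",
          "package/submodule",
          "package/submodule/submodule_part_a",
          "package/submodule/submodule_part_b"):
    os.makedirs(os.path.join(root, d))
    open(os.path.join(root, d, "__init__.py"), "w").close()

found = sorted(find_packages(root))
print(found)
# → ['package', 'package.submodule', 'package.submodule.submodule_part_a',
#    'package.submodule.submodule_part_b']
```

In setup.py you would then write packages=find_packages() instead of the hard-coded list, so newly added subpackages get installed automatically.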
Given the following file structure (example):
library_project\
|- __init__.py
|
|--- utils_a\
| |- __init__.py
| |- util_functions_a.py
|
|--- utils_b\
| |- __init__.py
| |
| |--- utils_b_1\
| | |- __init__.py
| | |- util_function_b1.py
| |
| |--- utils_b_2\
| | |- __init__.py
| | |- util_function_b2.py
and a second project
other_project\
|- __init__.py
|- run.py
in run.py
from library_project.utils_b.util_function_b2 import do_something
do_something()
How can util_function_b2.py use functions from util_functions_a.py?
All the examples for relative imports that I found assume the imported package is a sibling package (e.g. https://docs.python.org/2/tutorial/modules.html#intra-package-references), not two levels up.
The import statement allows you to use an arbitrary number of . dots to refer to packages further up the tree.
. refers to the current package, utils_b_2.
.. refers to the parent package, utils_b.
... refers to library_project.
From util_function_b2 you can refer to util_functions_a with 3 dots:
from ...utils_a.util_functions_a import somename
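The three-dot rule can be checked end to end. The sketch below rebuilds just enough of library_project in a temporary directory (with a stub somename() in util_functions_a) and imports through the package root, which is what makes the relative import legal:

```python
import os
import sys
import tempfile
import textwrap

# Recreate the relevant part of the tree with empty __init__.py files.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "library_project")
for d in (pkg,
          os.path.join(pkg, "utils_a"),
          os.path.join(pkg, "utils_b"),
          os.path.join(pkg, "utils_b", "utils_b_2")):
    os.makedirs(d)
    open(os.path.join(d, "__init__.py"), "w").close()

with open(os.path.join(pkg, "utils_a", "util_functions_a.py"), "w") as f:
    f.write("def somename():\n    return 'hello from utils_a'\n")

# util_function_b2 climbs three levels with "..." exactly as described above.
with open(os.path.join(pkg, "utils_b", "utils_b_2", "util_function_b2.py"), "w") as f:
    f.write(textwrap.dedent("""\
        from ...utils_a.util_functions_a import somename

        def do_something():
            return somename()
    """))

sys.path.insert(0, root)
from library_project.utils_b.utils_b_2.util_function_b2 import do_something
print(do_something())  # → hello from utils_a
```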
Given the directory structure, to import util_function_b2.py:
from library_project.utils_b.utils_b_2.util_function_b2 import *
How can util_function_b2.py use functions from util_functions_a.py?
You can either add the project's parent directory to the Python path:
import sys
sys.path.append('/path/to/')  # the directory that contains library_project
then:
from library_project.utils_a import *
Or use relative addressing with leading dots, as explained in the other answer.
I use setuptools to distribute my Python package. Now I need to distribute additional data files.
From what I've gathered from the setuptools documentation, I need to have my data files inside the package directory. However, I would rather have my data files in a subdirectory of the root directory.
What I would like to avoid:
/ #root
|- src/
| |- mypackage/
| | |- data/
| | | |- resource1
| | | |- [...]
| | |- __init__.py
| | |- [...]
|- setup.py
What I would like to have instead:
/ #root
|- data/
| |- resource1
| |- [...]
|- src/
| |- mypackage/
| | |- __init__.py
| | |- [...]
|- setup.py
I just don't feel comfortable with having so many subdirectories if it's not essential. I fail to find a reason why I /have/ to put the files inside the package directory. It is also cumbersome to work with so many nested subdirectories, IMHO. Or is there any good reason that would justify this restriction?
Option 1: Install as package data
The main advantage of placing data files inside the root of your Python package
is that it lets you avoid worrying about where the files will live on a user's
system, which may be Windows, Mac, Linux, some mobile platform, or inside an Egg. You can
always find the directory data relative to your Python package root, no matter where or how it is installed.
For example, if I have a project layout like so:
project/
    foo/
        __init__.py
        data/
            resource1/
                foo.txt
You can add a function to __init__.py to locate an absolute path to a data
file:
import os

_ROOT = os.path.abspath(os.path.dirname(__file__))

def get_data(path):
    return os.path.join(_ROOT, 'data', path)

print(get_data('resource1/foo.txt'))
Outputs:
/Users/pat/project/foo/data/resource1/foo.txt
After the project is installed as an Egg the path to data will change, but the code doesn't need to change:
/Users/pat/virtenv/foo/lib/python2.6/site-packages/foo-0.0.0-py2.6.egg/foo/data/resource1/foo.txt
Option 2: Install to fixed location
The alternative would be to place your data outside the Python package and then
either:
Have the location of data passed in via a configuration file,
command line arguments or
Embed the location into your Python code.
This is far less desirable if you plan to distribute your project. If you really want to do this, you can install your data wherever you like on the target system by specifying the destination for each group of files by passing in a list of tuples:
from setuptools import setup

setup(
    ...
    data_files=[
        ('/var/data1', ['data/foo.txt']),
        ('/var/data2', ['data/bar.txt'])
    ]
)
Updated: Example of a shell function to recursively grep Python files:
atlas% function grep_py { find . -name '*.py' -exec grep -Hn $* {} \; }
atlas% grep_py ": \["
./setup.py:9: package_data={'foo': ['data/resource1/foo.txt']}
I think I found a good compromise which will allow you to maintain the following structure:
/ #root
|- data/
| |- resource1
| |- [...]
|- src/
| |- mypackage/
| | |- __init__.py
| | |- [...]
|- setup.py
You should install data as package_data, to avoid the problems described in samplebias' answer, but in order to maintain the file structure you should add this to your setup.py:
import os
from setuptools import setup

try:
    os.symlink('../../data', 'src/mypackage/data')
    setup(
        ...
        package_data={'mypackage': ['data/*']}
        ...
    )
finally:
    os.unlink('src/mypackage/data')
This way we create the appropriate structure "just in time" and keep the source tree organized.
To access such data files within your code, you 'simply' use:
from pkg_resources import Requirement, resource_filename

data = resource_filename(Requirement.parse("main_package"), 'mypackage/data')
I still don't like having to specify 'mypackage' in the code, as the data might have nothing to do with this module, but I guess it's a good compromise.
I could use importlib_resources or importlib.resources (depending on python version).
https://importlib-resources.readthedocs.io/en/latest/using.html
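A runnable sketch of that approach, building a throwaway mypackage with a bundled data file (the names mirror the layout above; importlib.resources.files() needs Python 3.9+, with importlib_resources as the backport for older versions):

```python
import os
import sys
import tempfile

from importlib import resources

# Throwaway package with a data file, standing in for the installed mypackage.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypackage")
os.makedirs(os.path.join(pkg, "data"))
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "data", "resource1.txt"), "w") as f:
    f.write("hello")

sys.path.insert(0, root)

# files() returns a Traversable rooted at the package; it works the same
# whether the package lives in a plain directory, a zip, or a wheel.
text = resources.files("mypackage").joinpath("data").joinpath("resource1.txt").read_text()
print(text)  # → hello
```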
I think you can pass basically anything as the *data_files* argument to setup().