File tree:
/path/to/project/root/
├── .git/
├── Pipfile
├── Pipfile.lock
├── __init__.py
└── package1
    ├── __init__.py
    └── src
        ├── __init__.py
        ├── external_lib
        │   └── packageX
        │       ├── packageY
        │       └── packageZ
        ├── foo.py
        └── bar.py
Constraints:
foo.py imports bar.py
both foo.py and bar.py import some files from external_lib/packageX/packageZ
files in external_lib/packageX/packageZ require packageX and packageY to be on the PYTHONPATH for their own imports.
I don't have a single entry point: sometimes I want to launch foo.py, sometimes bar.py.
external_lib is external, it's a git submodule. I don't want to change its content.
This is a shared project, so I'd like a solution that's included in the git repository...
What I'm doing right now
I'm following the advice given here (in French, though), which claims to be good practice:
Before running any script, I run cd package1/src/external_lib/packageX; export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/packageY; cd ../../../..
All my imports are written like this: import package1.src.bar, import package1.src.external_lib.packageX.packageZ.some_module
I always call all scripts from the project root.
This gives something like this:
cd /path/to/project/root/
pipenv shell
cd package1/src/external_lib/packageX; export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/packageY; cd ../../../..
python package1/src/foo.py
python package1/src/bar.py
Question
What is the best practice, in terms of project architecture, Pipenv use, import writing, etc., to manage all the imports smoothly, with a minimum of manual actions and constraints (such as export PYTHONPATH=... commands before running scripts, or having to launch the scripts from the root), in the most Pythonic and Pipenv-ic way?
Ideally I'd also like PyCharm to resolve all the imports correctly without further manual settings, so that what I do in PyCharm reflects what would happen from a terminal.
One possible solution I've seen is to use Pipenv's .env file, adding the export PYTHONPATH=... line to it and relying on its automatic loading. However, apparently this file is not meant for that and should not be committed.
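For concreteness, here is what that would look like. Pipenv automatically loads a .env file from the project root when you run pipenv shell or pipenv run; the paths below are the ones from the tree above, written relative to the root (so this still assumes scripts are launched from there), and the ${PYTHONPATH} expansion relies on the python-dotenv interpolation that Pipenv uses:

# .env at the project root -- auto-loaded by `pipenv shell` / `pipenv run`
PYTHONPATH=${PYTHONPATH}:package1/src/external_lib/packageX:package1/src/external_lib/packageX/packageY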
Related
I always have the same problem and I finally want to get rid of it. My folder structure looks like this:
project
├── scripts
│   └── folder
│       └── file.py
└── submodules
    └── lab_devices
        └── optical_devices
            ├── __init__.py
            └── powermeter_driver.py
I now want to import powermeter_driver.py in file.py. So what I do in file.py is:
from submodules.lab_devices.optical_devices.powermeter_driver import PowermeterDriver
but this gives ModuleNotFoundError: No module named 'submodules'. I don't want to use
import sys
sys.path.insert(0, '../submodules')
Is there an easy workaround?
The imports will be resolved correctly if you run the script in the correct way, which is from the parent directory and using the -m switch. So you should cd into the parent folder and add __init__.py files as in:
project
├── scripts
│   ├── __init__.py
│   └── folder
│       ├── __init__.py
│       └── file.py
└── submodules
    ├── __init__.py
    └── lab_devices
        ├── __init__.py
        └── optical_devices
            ├── __init__.py
            └── powermeter_driver.py
so that Python knows these are packages, then run
python -m scripts.folder.file # note no .py
In file.py you can then use the absolute import as you do now, because submodules will be detected as a package. You should indeed avoid hacking sys.path by all means.
You need to consider that if you write from submodules... this is an absolute import. It means Python starts searching for submodules in all directories in sys.path. When run with -m, Python adds your current working directory as the first item of sys.path, so if you cd to your project directory and then run the file as a module using python -m, it can work.
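You can see this for yourself with a throwaway print at the top of file.py (a quick diagnostic sketch, not part of the fix):

# temporary, at the very top of file.py
import sys
print(sys.path[0])
# python scripts/folder/file.py  -> prints .../project/scripts/folder
# python -m scripts.folder.file  -> prints the project directory you launched from
#                                   (older Pythons may show '', which also means the cwd)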
Of course absolute imports suck if you have files in relative locations to each other. I've had similar issues, and I've created an experimental new import library, ultraimport, that allows you to do file-system-based imports. It could solve your issue if you are willing to add a new library for this.
Instead of:
from submodules.lab_devices.optical_devices.powermeter_driver import PowermeterDriver
In file.py you would then write:
import ultraimport
PowermeterDriver = ultraimport('__dir__/../../submodules/lab_devices/optical_devices/powermeter_driver.py', 'PowermeterDriver')
The file path is relative to file.py and thus this will always work, no matter how you run your code or what is in sys.path.
One caveat when importing scripts like this arises if they themselves contain further relative imports. ultraimport has a built-in preprocessor to rewrite subsequent relative imports so they continue to work.
I would like to integrate pytest into my workflow. I made the following folder structure:
myproject
├── venv
└── src
    ├── __init__.py
    ├── foo
    │   ├── __init__.py
    │   └── bar.py
    └── tests
        ├── __init__.py
        └── test_bar.py
I would like to be able to import the foo package's namespace so that I can write test scripts in the tests folder. Whenever I try to run pytest, or pytest --import-mode append, I always get the following error:
ModuleNotFoundError: No module named 'foo'
I found a similar question here, but adding the __init__.py files to the tests and src folders does not solve the issue.
Does this have to do with the PYTHONPATH system variable? This folder structure works perfectly if I run the __main__.py from the src folder, but fails when I want to use pytest. Is there a way to do this without having to mess with PYTHONPATH or are there automated ways to edit the system variable?
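For illustration, the kind of import in test_bar.py that triggers this (the exact names are hypothetical, since the file isn't shown):

# tests/test_bar.py -- hypothetical contents
from foo.bar import some_function   # -> ModuleNotFoundError: No module named 'foo'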
I have a silly question, but I haven't found any mention of it so far. I created a .py file containing all of the functions I use for a job. Whenever I need them in a script, I have to point Python at the folder where the .py file is located, like the script below.
import os
os.chdir('...\\path-to-my-file')  # change into the folder that holds the file
import my_file as mfl  # note: module names can't contain hyphens
My question is: is there any way I can save the .py file with my functions right at the root of Anaconda and import it the same way I import NumPy, for example? I know that the libraries live in the 'C:\Users\User\anaconda3\Lib' folder, so I could save it directly to that folder and import it in a simpler way, like:
import numpy as np
import my_file as mfl
If this is feasible, would there be a standardized way to write the code?
In order to be able to import mypackage the same way you do with any other module, the correct approach is to use pip locally:
python -m pip install -e /path_to_package/mypackage/
python -m ensures you are using the pip from the same Python installation you are currently running.
-e makes the install editable, i.e. import mypackage picks up your source changes (on the next interpreter run) instead of using a snapshot copied into site-packages.
mypackage must be an installable package, i.e. contain an __init__.py file and a basic setup.py (or a pyproject.toml).
minimal setup.py
from setuptools import find_packages, setup

setup(
    name='mypackage',  # Required
    version='0.0.1',  # Required
    packages=find_packages(),  # Required
)
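If you go the pyproject.toml route instead, a roughly equivalent minimal file might look like this (a sketch assuming the setuptools build backend; editable installs driven purely by pyproject.toml need a reasonably recent pip and setuptools):

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

[project]
name = "mypackage"
version = "0.0.1"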
the package structure must be like this:
mypackage/
    setup.py
    mypackage/  <----- this is a folder inside the other `mypackage/` folder
        __init__.py
or as a tree:
└── python_perso
    └── mypackage
        ├── mypackage
        │   └── __init__.py
        └── setup.py
[edit] after installation, the directory will look like this:
(for a package named mypackage)
└── python_perso
    └── mypackage
        ├── mypackage
        │   ├── __init__.py
        │   └── __pycache__
        │       └── __init__.cpython-38.pyc
        ├── mypackage.egg-info
        │   ├── PKG-INFO
        │   ├── SOURCES.txt
        │   ├── dependency_links.txt
        │   └── top_level.txt
        └── setup.py
5 directories, 7 files
I would suggest you create an environment variable called PYTHONPATH that points to your additional functions. It is better to leave your Anaconda root as it is if you are unsure of what you are doing. More on this here.
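For example, on Windows (which the anaconda3 path in the question suggests), one persistent way to set it from a Command Prompt is the sketch below; the folder name is purely illustrative:

setx PYTHONPATH "C:\Users\User\my_python_tools"

Terminals opened after that will find any .py file saved in that folder, so import my_file as mfl works without the os.chdir dance.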
I'm a beginner and I'm not understanding something in the project folder structure.
I have this project:
.
└── convertersProject/
    ├── conftest.py  --------------> best practice if you're not using a src structure
    ├── converters/  --------------> module I'm trying to import
    │   ├── __init__.py
    │   ├── binconverter.py
    │   ├── importester1.py  -----> import binconverter OR from converters import *
    │   └── submodule/
    │       ├── __init__.py  -----> not needed if I don't want to use it as a package
    │       └── importester2.py  -> from converter import binconverter
    ├── outsidemodule/
    │   ├── importester3.py
    │   └── __init__.py
    └── test/
        └── test_converters.py
I'm always receiving ModuleNotFoundError when trying to execute importester1/2/3.py directly from the project folder. I'm using a virtual environment, set up with python -m venv 'name' from a pyenv Python 3.8.5 selected with pyenv shell 3.8.5.
What I think I'm understanding:
I have to use absolute paths such as from converters.binconverter import bin2dec, bin2dec being a function in binconverter. If I wanted to use relative imports, I'd have to be inside the folder tree, as when trying to execute importester2.py, because submodule is inside converters. So I couldn't use relative imports for outsidemodule.
PYTHONPATH is the current folder you're executing from, so if I'm executing from the project folder as python converters/submodule/importester2.py, I don't have to append any value to the PYTHONPATH of the virtual environment (indeed, I've read it's not good practice).
__init__.py allows you to use the module without appending values to the virtualenv's PYTHONPATH, so I could import the converters module into outsidemodule using absolute paths.
The tests work using this logic. In fact, if I change something, the VSCode debugger detects import problems.
What the hell am I missing?
I am currently developing a package simultaneously with a few projects that use the package, and I'm struggling to figure out how to structure my directory tree and imports.
Ideally, I want something like this:
main_directory
├── shared_package
│   ├── __init__.py
│   ├── package_file1.py
│   └── package_file2.py
├── project1
│   ├── main.py
│   ├── module1.py
│   └── other_package
│       ├── __init__.py
│       └── other_package_file.py
└── project2
    └── ...
I can't figure out how to make the imports work cleanly for importing shared_package from python files in project1. Is there a preferred way to do this?
Any help would be appreciated!
shared_package will eventually be standalone. Other people will install and import it the normal way, and it'll live with the rest of the Python modules in site-packages or wherever.
To replicate this, I recommend just updating your PYTHONPATH to point to main_directory (or wherever you keep shared_package) - that way,
import shared_package
will work exactly as it will once shared_package is installed normally, because the package is on the Python path either way.
Note that PYTHONPATH is an environment variable, so the means for doing this will vary based on your operating system. Regardless, a quick search for how to modify the variable permanently on your OS should be easy.
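For example, on Linux or macOS you might add a line like the following to ~/.bashrc (substitute the real location of main_directory; the path here is illustrative):

export PYTHONPATH="$PYTHONPATH:/path/to/main_directory"

After opening a new shell, python project1/main.py can do import shared_package from any directory, exactly as it will once the package is properly installed.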