I just transitioned from pipenv to poetry and I'm having trouble importing a local package I'm developing into a few of my scripts. To make this more concrete, my project looks something like:
pyproject.toml
poetry.lock
bin/
    myscript.py
mypackage/
    __init__.py
    lots_of_stuff.py
Within myscript.py, I import mypackage. But when I poetry run bin/myscript.py I get a ModuleNotFoundError because the PYTHONPATH does not include the root of this project. With pipenv, I could solve that by specifying PYTHONPATH=/path/to/project/root in a .env file, which would be automatically loaded at runtime. What is the right way to import local packages with poetry?
I ran across this piece on using environment variables, but export POETRY_PYTHONPATH=/path/to/project/root doesn't seem to help.
After quite a bit more googling, I stumbled on the packages attribute within the tool.poetry section of pyproject.toml files. To include local packages in the distribution, you can specify:
# pyproject.toml
[tool.poetry]
# ...
packages = [
    { include = "mypackage" },
]
Now these packages are installed in editable mode :)
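With that in place, the workflow might look roughly like this (a sketch; poetry install installs the project itself, including mypackage, into the environment, so scripts run via poetry run can import it):
poetry install                      # installs the project (and mypackage) into the virtualenv
poetry run python bin/myscript.py   # mypackage is now importable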
Adding a local package (in development) to another project can be done as:
poetry add ./my-package/
poetry add ../my-package/dist/my-package-0.1.0.tar.gz
poetry add ../my-package/dist/my_package-0.1.0.whl
If you want the dependency to be installed in editable mode, you can specify it in the pyproject.toml file. This means that changes in the local directory will be reflected directly in the environment.
[tool.poetry.dependencies]
my-package = {path = "../my/path", develop = true}
With the current preview release (1.2.0a), a command-line option was introduced to avoid the manual steps above:
poetry add --editable /path/to/package
Other ways of adding packages can be found on the poetry add documentation page.
If the above doesn't work, you can take a look at the additional steps detailed in this discussion.
Related
I'm trying to develop a Python library which will eventually be put on PyPI.
It's a library I use in another project which pulls it from PyPI.
I have unit-tests for the library in its own project repository. But I mainly test the library in use through the main application.
I was previously "publishing" the library locally, using
pip install -e
so that the main project in another repository can pull it from local packages and I can test it in context.
But now I'm moving to pipenv. And I want to be able to do the same. But if I put the dependency in the Pipfile, it seems to try to pull it from the real PyPI, not my local store.
How do I set up this workflow with Pipenv?
Pipenv can install packages from various sources, not only from PyPI. The CLI usage is very similar to pip's, which is a deliberate feature of pipenv. You can pass a local path or a URL with a VCS prefix to pipenv install, and Pipenv will add the package to the Pipfile accordingly.
CLI Usage
First go to the project folder (which contains the Pipfile) of your main application. Then run
$ pipenv install --dev -e "/path/to/your/local/library"
If the library is version-controlled by Git or SVN, you can also use a URL like this:
$ pipenv install --dev -e git+https://github.com/your_user_id/libraryname#develop
If the Git repository for your library is stored locally, use file:// instead of https://github.com. Other protocols like FTP and SSH are also supported.
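For example (the path and package name here are just placeholders), a locally stored Git repository could be referenced roughly like this:
$ pipenv install --dev -e "git+file:///path/to/your/local/library#egg=libraryname"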
The above command will pull the package from the source, install it and modify the Pipfile in the current folder to include the package.
Pipfile usage
Usually you do not need to modify the Pipfile directly. For advanced settings, please see the Pipfile spec. Below are some example entries in a Pipfile:
[dev-packages]
mylibrary = { git = 'https://github.com/xxx/mylibrary.git', ref = '0.0.1', editable = true }
"e1839a8" = {path = "/path/to/your/local/library2", editable = true}
"e51a27" = {file = "/path/to/your/local/library1/build/0.0.1.zip"}
Set up a private PyPI index
Although it would be overkill, just to be complete, setting up a private PyPI server can also work.
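As a rough sketch (the index URL and package name below are made up), a Pipfile pointing at such a private index could look something like:
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[[source]]
name = "local"
url = "http://localhost:8080/simple"   # hypothetical private index
verify_ssl = false

[dev-packages]
mylibrary = { version = "*", index = "local" }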
I'm trying out one of the recommended Python package layouts with a src directory.
My problem is that when I run pytest from the command line, my tests are not finding my Python package. I tried running from the top level of the directory and from within the tests directory, but I'm still getting a ModuleNotFoundError exception. I'm running Python 3.5 with pytest-3.5.0.
What is the recommended way to execute pytest with this type of Python package layout?
If you're using py.test to run the tests, it won't find the application because you haven't installed it. You can use python -m pytest to run your tests, as python will automatically add your current working directory to sys.path.
Hence, running python -m pytest .. would work if you're doing it from the src directory.
The structure that I normally use is:
root
├── packagename
│ ├── __init__.py
│ └── ...
└── tests
└── ...
This way when I simply run python -m pytest in the root directory it works fine. Read more at: https://stackoverflow.com/a/34140498/1755083
As a second option, if you still want to run this with py.test, you can use:
PYTHONPATH=src/ py.test
and run it from the root. This simply adds the src folder to your PYTHONPATH and now your tests will know where to find the packagename package.
The third option is to use pip with -e to install your package in editable mode. This creates a link to the packagename folder and installs it with pip, so any modification done in packagename is immediately reflected in pip's installation (as it is a link). The issue with this is that you need to redo the installation if you move the repo, and you need to keep track of which one is installed if you have more than one clone/package.
Your layout looks OK and is the one recommended today.
But I assume your problem is the imports in your test_*.py files. You shouldn't have to care about where to import your package from in your unit tests; just import it.
Install in Development Mode via --editable
How can this be done? You have to "install" your package in development mode, using the --editable option of pip. In that case no real package is built and installed; instead, links are used to expose your package folder (the development source) to the environment as if it were a real release package.
Now your unit tests never need to care about where the package is installed or how to import it. Just import it; the package is known to the environment.
Note: This is the recommended way today. Other "solutions" that hack around with sys.path or environment variables like PYTHONPATH are leftovers from Python's early days and should be avoided.
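A minimal sketch of that workflow, assuming there is a setup.py (or pyproject.toml) at the project root:
python -m venv .venv
. .venv/bin/activate        # on Windows: .venv\Scripts\activate
pip install -e .            # editable ("development mode") install of your package
pytest                      # the tests can now simply import packagename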
Just my two cents:
Don't use py.test, use pytest. py.test is an old, deprecated command.
A note for Windows users: you may or may not have PYTHONPATH defined on your system.
In such a case, I used the following commands to run tests:
set PYTHONPATH=src
pytest
To examine the PYTHONPATH:
echo %PYTHONPATH%
src
And the directory structure containing the src folder:
setup.py
src
    utils
        algo.py
        srel.py
        __init__.py
tests
    algo_test.py
    srel_test.py
    __init__.py
Finally, the documentation says that you might omit the __init__.py files. Removing them worked for me in this case.
I think the idea behind the src layout is to isolate the package code, in particular from the tests. The best approach I see here is to install your package and test it as it will be delivered to future users.
This is much more suitable for development: no need for hacks like import src or modifying the PYTHONPATH.
1. Create a virtual environment
2. Build your package
3. Install your package manually
4. Install your test dependencies (possibly different from the package dependencies)
5. Run pytest in the main directory
It works smoothly.
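One possible way to run these steps, assuming a standard setup.py/pyproject.toml project (the wheel filename below is only illustrative):
python -m venv .venv
. .venv/bin/activate
pip install build                                  # provides python -m build
python -m build                                    # builds a wheel into dist/
pip install dist/packagename-0.1.0-py3-none-any.whl
pip install pytest                                 # test-only dependencies
pytest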
I have a local git repository on my machine, let's say under /develop/myPackage.
I'm currently developing it as a python package (a Django app) and I would like to access it from my local virtualenv. I've tried to include its path in my PYTHONPATH (I'm on a Mac)
export PATH="$PATH:/develop/myPackage"
The directory already contains a __init__.py within its root and within each subdirectory.
No matter what I do, I can't get it to work; Python won't see my package.
The alternatives are:
Push my local change to github and install the package within my virtualenv from there with pip
Activate my virtualenv and install the package manually with python setup.py install
Since I often need to make changes to my code, the last two solutions would require too much work all the time, even for a small change.
Am I doing something wrong? Would you suggest a better solution?
Install it in editable mode from your local path:
pip install -e /develop/MyPackage
This actually symlinks the package within your virtualenv so you can keep on devving and testing.
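If you want to double-check that the import really resolves to your development checkout (assuming the package is importable as myPackage), a quick sanity check:
python -c "import myPackage; print(myPackage.__file__)"   # should point into /develop/MyPackage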
The example you show above uses PATH, not PYTHONPATH. Generally, the search path used by Python is partially determined by the PYTHONPATH environment variable (PATH is of little use for this case).
Try this:
export PYTHONPATH=$PYTHONPATH:/develop/myPackage
Though in reality, you likely want it pointing to the directory that contains your package (so you can do 'import myPackage' rather than importing things from within the package). That being said, you likely want:
export PYTHONPATH=$PYTHONPATH:/develop/
Reference the python docs here for more information about Python's module/package search path: http://docs.python.org/2/tutorial/modules.html#the-module-search-path
By default, Python uses the packages that it was installed with as its default path, and as a result PYTHONPATH is unset in the environment.
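A quick way to see what the interpreter will actually search (just a sanity check, nothing project-specific):
python -c "import sys; print('\n'.join(sys.path))"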
I'm using Git for version control on a Django project.
As much as possible, all the source code that is not part of the project per se, but that the project depends on, is brought in as Git submodules. These live in a lib directory and have to be included on the Python path. The directory/file layout looks like:
.git
docs
lib
my_project
    apps
    static
    templates
    __init__.py
    urls.py
    manage.py
    settings.py
.gitmodules
README
What, would you say, is the best practice for including the libs on python path?
I am using virtualenv, so I could easily symlink the libraries to the virtualenv's site-packages directory. However, this would tie the virtualenv to this specific project. My understanding is that the virtualenv should not depend on my files; instead, my files should depend on the virtualenv.
I was thinking of using the same virtualenv for different local copies of this project, but if I do things this way I will lose that capability. Any better ideas on how to approach this?
Update:
The best solution turned out being to let pip manage all the dependencies.
However, this means not being able to use git submodules, as pip can't yet handle relative paths properly. So external dependencies will have to live in the virtualenv (typically: my_env/src/a_python_module).
I'd still prefer to use submodules, to have some of the dependencies living in my project tree. This makes more sense to me, as I've already needed to fork those repos to change some bits of them and will likely have to change them some more in the future.
Dump all your installed packages into a requirements file (requirements.txt is the standard name) using
pip freeze > requirements.txt
Every time you need a fresh virtualenv, you just have to do:
virtualenv <name> --no-site-packages
pip install -r requirements.txt
pip install -r requirements.txt also works great if you want to update to newer packages.
Just keep requirements.txt in sync with your packages (by running pip freeze every time something changes) and you're done, no matter how many virtualenvs you have.
NOTE: if you need to do some development on a package, you can install it using the -e (editable) parameter; this way you can edit the package and you don't have to uninstall/reinstall every time you want to test new stuff :)
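For example (the path below is just a placeholder), an editable install of a local checkout alongside a pinned requirements file might look like:
pip install -e /path/to/your/local/package   # editable install: changes are picked up without reinstalling
pip freeze > requirements.txt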
I'd like to start developing an existing Python module. It has a source folder and a setup.py script to build and install it. The build script just copies the source files, since they're all Python scripts.
Currently, I have put the source folder under version control, and whenever I make a change I re-build and re-install. This seems a little slow, and it doesn't sit well with me to "commit" my changes to my Python install each time I make a modification. How can I make my import statements resolve to my development directory instead?
Use a virtualenv and use python setup.py develop to link your module to the virtual Python environment. This will make your project's Python packages/modules show up on the sys.path without having to run install.
Example:
% virtualenv ~/virtenv
% . ~/virtenv/bin/activate
(virtenv)% cd ~/myproject
(virtenv)% python setup.py develop
Virtualenv was already mentioned.
And as your files are already under version control you could go one step further and use Pip to install your repo (or a specific branch or tag) into your working environment.
See the docs for Pip's editable option:
-e VCS+REPOS_URL[#REV]#egg=PACKAGE, --editable=VCS+REPOS_URL[#REV]#egg=PACKAGE
Install a package directly from a checkout. Source
will be checked out into src/PACKAGE (lower-case) and
installed in-place (using setup.py develop).
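A concrete example of that option (the URL, branch, and package name are just placeholders):
pip install -e "git+https://github.com/youruser/yourlibrary.git@develop#egg=yourlibrary"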
Now you can work on the files that pip automatically checked out for you and when you feel like it, you commit your stuff and push it back to the originating repository.
To get a good, general overview concerning Pip and Virtualenv see this post: http://www.saltycrane.com/blog/2009/05/notes-using-pip-and-virtualenv-django
Install the distribute package, then use developer mode. Just use python setup.py develop --user and that will place path pointers in your user site directory to your workspace.
Change the PYTHONPATH to your source directory. A good option is to work with an IDE like Eclipse that can override the default PYTHONPATH.