I'm trying to develop a Python library which will eventually be put on PyPI.
It's a library I use in another project which pulls it from PyPI.
I have unit-tests for the library in its own project repository. But I mainly test the library in use through the main application.
I was previously "publishing" the library locally by installing it in editable mode:
pip install -e /path/to/the/library
so that the main project in another repository can pull it from local packages and I can test it in context.
But now I'm moving to Pipenv, and I want to be able to do the same. But if I put the dependency in the Pipfile, it seems to try to pull it from the real PyPI, not my local copy.
How do I set up this workflow with Pipenv?
Pipenv can install packages from various sources, not only from PyPI. The CLI usage is deliberately similar to pip's. You can pass a local path or a URL with a VCS prefix to pipenv install, and Pipenv will add the package to the Pipfile accordingly.
CLI Usage
First go to the project folder (which contains the Pipfile) of your main application. Then run
$ pipenv install --dev -e "/path/to/your/local/library"
If the library is version controlled with Git or SVN, you can also use a URL like this (pinning the develop branch):
$ pipenv install --dev -e "git+https://github.com/your_user_id/libraryname.git@develop#egg=libraryname"
If the Git repository for your library is stored locally, use a file:// URL instead of https://github.com. Other protocols such as SSH are also supported.
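For example, installing in editable mode from a local clone over the file protocol might look like this (the path and egg name are placeholders):
$ pipenv install --dev -e "git+file:///path/to/your/local/library#egg=libraryname"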
Each of these commands pulls the package from its source, installs it, and modifies the Pipfile in the current folder to include the package.
Pipfile usage
Usually you do not need to modify the Pipfile directly. For advanced settings, see the Pipfile spec. Below are some example entries in a Pipfile:
[dev-packages]
mylibrary = { git = 'https://github.com/xxx/mylibrary.git', ref = '0.0.1', editable = true }
"e1839a8" = {path = "/path/to/your/local/library2", editable = true}
"e51a27" = {file = "/path/to/your/local/library1/build/0.0.1.zip"}
Set up a private PyPI index
Although it would be overkill for this use case, just to be complete: setting up a private PyPI server can also work.
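For the sake of illustration, a private index can then be declared as an extra source in the Pipfile and referenced per package. A minimal sketch, assuming a hypothetical server at my.private.index:
[[source]]
name = "private"
url = "https://my.private.index/simple"
verify_ssl = true

[packages]
mylibrary = { version = "*", index = "private" }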
Related
I just transitioned from pipenv to poetry and I'm having trouble importing a local package I'm developing into a few of my scripts. To make this more concrete, my project looks something like:
pyproject.toml
poetry.lock
bin/
    myscript.py
mypackage/
    __init__.py
    lots_of_stuff.py
Within myscript.py, I import mypackage. But when I poetry run bin/myscript.py I get a ModuleNotFoundError because the PYTHONPATH does not include the root of this project. With pipenv, I could solve that by specifying PYTHONPATH=/path/to/project/root in a .env file, which would be automatically loaded at runtime. What is the right way to import local packages with poetry?
I ran across this piece on using environment variables, but export POETRY_PYTHONPATH=/path/to/project/root doesn't seem to help.
After quite a bit more googling, I stumbled on the packages attribute within the tool.poetry section of pyproject.toml. To include local packages in the distribution, you can specify:
# pyproject.toml
[tool.poetry]
# ...
packages = [
    { include = "mypackage" },
]
Now these packages are installed in editable mode :)
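With that in place, reinstalling the project and launching the script through Poetry should resolve the import; a quick check, using the paths from the question:
poetry install
poetry run python bin/myscript.py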
Adding a local package (in development) to another project can be done with:
poetry add ./my-package/
poetry add ../my-package/dist/my-package-0.1.0.tar.gz
poetry add ../my-package/dist/my_package-0.1.0.whl
If you want the dependency to be installed in editable mode, you can specify it in the pyproject.toml file. This means that changes in the local directory will be reflected directly in the environment.
[tool.poetry.dependencies]
my-package = {path = "../my/path", develop = true}
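After adding such an entry, re-running the install should create the editable link:
poetry install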
With the current preview release (1.2.0a), a command-line option was introduced to avoid the manual steps above:
poetry add --editable /path/to/package
Other ways of adding packages can be found on the poetry add page.
If the above doesn't work, you can take a look at the additional steps detailed in this discussion.
I created a package containing a Pipfile, and I want to test it with Docker.
I want to install the packages listed in the Pipfile with pip, without creating a virtualenv.
# (do something to create some-file)
RUN pip install (some-file)
How can I do this?
Eventually, pip should be able to do this itself; at least, that's what the pip developers say. Currently, that is not yet implemented.
For now, a Pipfile is a TOML file, so you can use a TOML parser to extract the package constraints and emit them in a format that pip recognizes. For example, if your Pipfile contains only simple string version specifiers, this little script will write out a requirements.txt file that you can then pass to pip install -r:
import sys

import toml

# Load the Pipfile given as the first command-line argument
with open(sys.argv[1]) as f:
    result = toml.load(f)

# Emit one requirements.txt line per entry in [packages]
for package, constraint in result['packages'].items():
    if constraint == '*':
        print(package)
    else:
        print(f'{package} {constraint}')
If your Pipfile contains more complicated constructs, you'll have to edit this code to account for them.
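Assuming the script above is saved as pipfile_to_requirements.py (the name is arbitrary), the conversion and install might look like:
python pipfile_to_requirements.py Pipfile > requirements.txt
pip install -r requirements.txt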
An alternative that you might consider, which is suitable for a Docker container, is to use pipenv to install the packages into the system Python installation and just remove the generated virtual environment afterwards.
pipenv install --system
pipenv --rm
However, strictly speaking that doesn't achieve your stated goal of doing this without creating a virtualenv.
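For what it's worth, in a Dockerfile the system-install approach might look like this sketch (the base image and the --deploy flag, which aborts if Pipfile.lock is out of date, are my choices, not from the question):
FROM python:3.9-slim
WORKDIR /app
COPY Pipfile Pipfile.lock ./
# Install straight into the system interpreter, no virtualenv
RUN pip install pipenv && pipenv install --system --deploy
COPY . .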
One of the other answers led me to this, but I wanted to call it out explicitly and explain why it's a useful solution.
Pipenv is useful because it helps you create a virtual environment. This is great on your local dev machine, as you will often have many projects with different dependencies, etc.
In CI/CD, you will be using containers that are often only spun up for a few minutes to complete part of your CI/CD pipeline. Since you spin up a new container each time you run the pipeline, there is no need to create a virtual environment inside it to keep things organised. You can simply install all your dependencies directly into the main OS version of Python.
To do this, run the command below in your CI/CD pipeline:
pipenv install --system
I have a custom Python package (call it MyProject) on my filesystem with a setup.py and a requirements.txt. This package needs to be used by a Flask server (which will be deployed on AWS/EC2/EB).
In my Flask project directory, I create a virtualenv and run pip install -e ../path/to/myProject.
But for some reason, MyProject's upstream git repo shows up in pip freeze:
...
llvmlite==0.19.0
-e git+https://github.com/USERNAME/MYPROJECT.git#{some-git-hash}
python-socketio==1.8.0
...
The reference to git is a problem, because the repository is private and the deployment server does not (and should not, and will never) have credentials to access it. The deployment server also doesn't even have git installed (and it seems extremely problematic that pip assumes without my permission that it does). There is nothing in MyProject's requirements.txt or setup.py that alludes to git, so I am not sure where the hell this is coming from.
I can dupe the project to a subdirectory of the Flask project, and then put the following in MyFlaskProject's requirements.txt:
...
llvmlite==0.19.0
./MyProject
python-socketio==1.8.0
...
But this doesn't work, because the path is taken as relative to the working directory of the pip process when it is run, not relative to the location of requirements.txt. Indeed, it seems pip is broken in this respect. In my case, EC2 runs its install scripts from some other directory (with a full path to requirements.txt specified), and as expected, this fails.
What is the proper way to deploy a custom python package as a dependency of another project?
To install your own Python package from a git repo, you might want to check this post.
To sort out the credential issue, why not install git on the EC2 instance? You could simply create an SSH key and share it with the MyProject repository.
I am using this solution on ECS instances deployed by Jenkins (with Habitus to hide Jenkins's SSH keys while building the image) and it works fine for me!
I'm using Git for version control on a Django project.
As much as possible, all the source code that is not part of the project per se, but that the project depends on, is brought in as Git submodules. These live in a lib directory and have to be included on the Python path. The directory/file layout looks like:
.git
docs
lib
my_project
    apps
    static
    templates
    __init__.py
    urls.py
    manage.py
    settings.py
.gitmodules
README
What, would you say, is the best practice for including the libs on python path?
I am using virtualenv, so I could easily symlink the libraries into the virtualenv's site-packages directory. However, this would tie the virtualenv to this specific project. My understanding is that the virtualenv should not depend on my files; instead, my files should depend on the virtualenv.
I was thinking of using the same virtualenv for different local copies of this project, but doing things this way I would lose that capability. Any better ideas on how to approach this?
Update:
The best solution turned out being to let pip manage all the dependencies.
However, this means not being able to use git submodules, as pip can't yet handle relative paths properly. So, external dependencies will have to live in the virtualenv (typically: my_env/src/a_python_module).
I'd still prefer to use submodules, to keep some of the dependencies living in my project tree. This makes more sense to me, as I've already needed to fork those repos to change some bits of them, and will likely have to change them some more in the future.
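For illustration, an editable VCS install into the active virtualenv might look like this (the repository URL is hypothetical):
pip install -e git+https://github.com/someuser/a_python_module.git#egg=a_python_module
After that, the checked-out source lives under my_env/src/, which is what the update above refers to.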
Dump all your installed packages into a requirements file (requirements.txt seems to be the standard name) using:
pip freeze > requirements.txt
Every time you need a fresh virtualenv, you just have to do:
virtualenv <name> --no-site-packages
pip install -r requirements.txt
pip install -r requirements.txt also works great if you want to update to newer packages.
Just keep requirements.txt in sync with your packages (by running pip freeze every time something changes) and you're done, no matter how many virtualenvs you have.
NOTE: if you need to do some development on a package, you can install it with the -e (editable) parameter; this way you can edit the package and you don't have to uninstall/reinstall every time you want to test new stuff :)
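A sketch of what such a requirements.txt might contain, mixing pinned releases with an editable entry (the package names and URL are made up):
Django==1.4
South==0.7.6
-e git+https://github.com/youruser/yourfork.git#egg=yourfork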
I'd like to start developing an existing Python module. It has a source folder and the setup.py script to build and install it. The build script just copies the source files since they're all python scripts.
Currently, I have put the source folder under version control, and whenever I make a change I re-build and re-install. This seems a little slow, and it doesn't sit well with me to "commit" my changes to my Python install each time I make a modification. How can I make my import statements resolve to my development directory instead?
Use a virtualenv and use python setup.py develop to link your module to the virtual Python environment. This will make your project's Python packages/modules show up on the sys.path without having to run install.
Example:
% virtualenv ~/virtenv
% . ~/virtenv/bin/activate
(virtenv)% cd ~/myproject
(virtenv)% python setup.py develop
Virtualenv was already mentioned.
And since your files are already under version control, you could go one step further and use pip to install your repo (or a specific branch or tag) into your working environment.
See the docs for Pip's editable option:
-e VCS+REPOS_URL[#REV]#egg=PACKAGE, --editable=VCS+REPOS_URL[#REV]#egg=PACKAGE
Install a package directly from a checkout. Source
will be checked out into src/PACKAGE (lower-case) and
installed in-place (using setup.py develop).
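A concrete invocation might look like this (the repository URL and egg name are placeholders):
pip install -e git+https://github.com/youruser/yourlib.git#egg=yourlib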
Now you can work on the files that pip automatically checked out for you and when you feel like it, you commit your stuff and push it back to the originating repository.
To get a good, general overview concerning Pip and Virtualenv see this post: http://www.saltycrane.com/blog/2009/05/notes-using-pip-and-virtualenv-django
Install the distribute package, then use development mode. Just run python setup.py develop --user and that will place path pointers in your user site directory that point to your workspace.
Alternatively, change PYTHONPATH to include your source directory. A good option is to work with an IDE like Eclipse, which can manage the PYTHONPATH for you.
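For example, in a POSIX shell, with the source checkout at a made-up path:
export PYTHONPATH="$HOME/dev/mymodule:$PYTHONPATH"
python -c "import mymodule"  # should now resolve from the development directory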