I've got a simple project that I want to be pip-installable, where the 'development time' dependencies are the same as the 'use time' dependencies.
To avoid duplication between setup.py and requirements.txt, I'm using the 'trick' of having my requirements.txt simply contain . (to pick up the dependencies from setup.py).
When working on the project, we can then clone it and pip install -r requirements.txt.
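For concreteness, the layout is roughly this (a minimal sketch; the project name and dependencies are hypothetical). requirements.txt contains nothing but the single line:
.
and setup.py carries the actual dependency list:
from setuptools import setup

setup(
    name="myproject",  # hypothetical name
    version="0.1.0",
    install_requires=[
        "requests",  # the real dependencies live here
        "click",
    ],
)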
The problem I'm having is that this also seems to end up installing the actual project itself (presumably because pip runs setup.py?).
Is there an option I can use with pip install to prevent this? Or some other way to work around it without ending up installing the project itself?
Related
Is it possible to install a pip package in such a way that it does not get listed when doing pip freeze > requirements.txt?
I am thinking of an equivalent to poetry add --dev, which adds (installs) a package as a development dependency so that it does not appear in the main dependency list.
Is there a way in pip to do something similar?
What you want is pipenv.
There are ways of making RStudio work with pipenv (link to an article).
This gives you complete package control, per-project Python version specification, and virtualenv management, all in one.
Otherwise, you'd have to maintain your requirements.txt file manually, and further down the line use a constraints.txt file as well.
Think of pipenv's files as the equivalent of yarn.lock vs package.json in the JS world, plus some extra sweet features.
You can use pipenv to generate a requirements.txt file by doing:
pipenv lock -r > requirements.txt
And you can add/install packages as development dependencies with:
pipenv install --dev <mypackage>
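For a sense of what pipenv tracks, a minimal Pipfile looks something like this (the package names and version specifiers are placeholders):
[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"

[packages]
django = "~=2.2"

[dev-packages]
pytest = "*"
Packages installed with --dev land under [dev-packages], so they stay out of the default dependency list, which is the poetry add --dev behaviour asked about above.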
In my requirements.txt I have packages defined in the following manner:
Django ~= 2.2.0
It means that when I run pip install -r requirements.txt, pip will find the latest available 2.2.x version and install it along with all of its dependencies.
What I need is a requirements-formatted list of all the packages, with the explicit versions that would be installed, but without actually installing any packages. So example output would be something like:
Django==2.2.23
package1==0.2.1
package2==1.4.3
...
So, in other words, I'm looking for something like the output of pip freeze, but without installing anything.
pip-compile is what you need!
Doc: https://github.com/jazzband/pip-tools
python -m pip install pip-tools
pip-compile requirements.txt --output-file requirements-all.txt
The pip-compile command lets you compile a pinned requirements file from your dependencies. This way you can pip install your dependencies and always get the same environment.
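As an illustration, for the Django ~= 2.2.0 line from the question, the compiled output would look something like this (the exact pins depend on when you run it; these versions are only illustrative):
django==2.2.23
    # via -r requirements.txt
pytz==2021.1
    # via django
sqlparse==0.4.1
    # via django
pip-compile also annotates every pin with a # via comment showing which requirement pulled it in.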
TL;DR
Try pipdeptree or pip-tree.
Explanation
pip, contrary to most package managers, doesn't have a big dependency graph to look up. What it does instead is let arbitrary setup code be executed, which automatically pulls in the dependencies. This means that, for example, a package could manage its dependencies in some way other than putting them in requirements.txt (see fastai for an example of a project that handles its dependencies differently).
So there is, theoretically, no other way to see all the dependencies than to actually run an install in an isolated environment, see what was pulled in, and then delete the environment (because it could potentially be the same part of the code that does the installation and that brings in the dependencies). You could actually do that with venv.
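A rough sketch of that throwaway-environment approach on a POSIX shell (the /tmp/probe path is arbitrary):
$ python -m venv /tmp/probe
$ /tmp/probe/bin/pip install -r requirements.txt
$ /tmp/probe/bin/pip freeze > pinned.txt
$ rm -rf /tmp/probe
pinned.txt then holds the fully resolved list without touching your working environment.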
In practice, tools like pipdeptree or pip-tree fetch the dependencies based on some standardization of the requirements (most packages separate the dependency declaration from the installation logic, and actually let pip handle both).
I would like to understand the current state of Python build systems and requirements management.
Imagine that I checked out the sources of some project that uses poetry (or pipenv), and that this project has a pyproject.toml file with its build system specified. Of course I can look into pyproject.toml, see that this one uses Poetry, install poetry and run poetry install, but I would like to avoid that.
Question: Is there a build-system-agnostic way to build Python project?
By "build" I mean install all the necessary requirements for the project to be run in-place.
With requirements.txt I would achieve that by running pip install -r requirements.txt.
To install the project (and its dependencies), recent versions of pip are perfectly capable of doing this:
path/to/python -m pip install path/to/project
or
path/to/python -m pip install --editable path/to/project
To build distributions of the project, currently build is the only build back-end agnostic tool I know of:
python -m build
For the content of the pyproject.toml itself, see:
https://stackoverflow.com/a/64151860
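The part that makes pip and build backend-agnostic is the [build-system] table in pyproject.toml. A Poetry-based project, for instance, typically declares something like:
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
The front-end reads this table, installs the declared back-end into an isolated environment, and delegates the build to it, which is why you don't need to install Poetry yourself.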
This is not so much a problem, but I have been adding a requirements.txt every time I start a new project and deploy it on Heroku. I don't understand the purpose of requirements.txt, though, and why it is so important.
Consider the scenario where you have installed 2 or 3 packages during a development cycle of a month or two.
During deployment, you forget to install one of them explicitly. This is where requirements.txt comes in very handy.
So whenever you install a new package, make sure to add that package name to requirements.txt.
Then during deployment, running a single command:
pip install -r requirements.txt
will install all the dependencies of your project.
It's also useful when you clone the repo to a new location.
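A typical workflow looks like this (gunicorn is just an example package):
$ pip install gunicorn
$ pip freeze > requirements.txt
pip freeze regenerates the complete pinned list, so you don't have to remember every package by hand.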
I'm writing a Python app to deploy on Heroku. Per Heroku's guide, I need to list package requirements in a Pip requirements.txt file. The guide instructs me to install the packages locally, then run pip freeze > requirements.txt to write the frozen requirements file.
However, one of the packages I want to use in deployment on Heroku can't be installed locally. It's incompatible with my operating system.
So how do I write a requirements.txt including this package suitable for Heroku?
The only way I can think of is to write it by hand - but this would be tedious, because the package in question has many dependencies of its own. Besides, this defeats the point of the package manager.
When deploying Ruby apps to Heroku, Bundler makes this easy. In my Gemfile I write
gem "pg", :group => :production
gem "sqlite3", :group => :development
The command bundle install then writes a frozen version list to Gemfile.lock (analogous to requirements.txt). It doesn't install the packages listed under the 'production' group, but it still freezes a consistent list of versioned packages.
Example: Gemfile and Gemfile.lock
You can have more than one file, and call them different things, but Heroku does expect a requirements.txt. For instance, for dev, you could maintain a dev_requirements.txt
Locally you can run:
$ pip freeze > dev_requirements.txt
etc, and
$ pip install -r dev_requirements.txt
and Heroku will run:
$ pip install -r requirements.txt
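If you want to avoid duplicating the shared packages between the two files, a requirements file can include another one with -r. A sketch (the file name common_requirements.txt and all packages/versions here are hypothetical):
# common_requirements.txt - shared by both environments
Django==2.2.23

# requirements.txt - what Heroku installs
-r common_requirements.txt
psycopg2==2.8.6

# dev_requirements.txt - local development only
-r common_requirements.txt
django-debug-toolbar==3.2.1
This mirrors the Gemfile groups: the production-only package lives in requirements.txt and is never installed locally.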
It's not possible. The issue has been reported to pip: https://github.com/pypa/pip/issues/747