How to install from requirements.txt [duplicate]

This question already has answers here:
How can I install packages using pip according to the requirements.txt file from a local directory?
(19 answers)
Closed 6 months ago.
I have to install Python packages from a requirements file that was provided to me. However, when I use the pip install -r requirements.txt command I get an error saying ERROR: Invalid requirement (from line 3 of requirements.txt). And when I comment out the third line, the error just moves on to the next lines. What does that mean, and how can I install the packages from the file?
Here's what the file contents look like:
# Name Version Build Channel
alabaster 0.7.12 py36_0
altgraph 0.17 pypi_0 pypi
appdirs 1.4.4 py_0
argh 0.26.2 py36_0
astroid 2.4.2 py36_0
async_generator 1.10 py36h28b3542_0
atomicwrites 1.4.0 py_0
attrs 20.3.0 pyhd3eb1b0_0
auto-py-to-exe 2.7.11 pypi_0 pypi
autopep8 1.5.4 py_0
babel 2.9.0 pyhd3eb1b0_0
backcall 0.2.0 py_0
bcrypt 3.2.0 py36he774522_0
black 19.10b0 py_0
bleach 3.2.2 pyhd3eb1b0_0
bottle 0.12.19 pypi_0 pypi
... and so on
I am using a new environment in Anaconda with Python version 3.6.12.

First, freeze all of your pip packages in the requirements.txt file using the command
pip freeze > requirements.txt
This should create the requirements.txt file in the correct format. Then try installing using the command
pip install -r requirements.txt
Make sure you're in the same folder as the file when running this command.
If you get some path name instead of the version number in the requirements.txt file, use this pip command to work around it.
pip list --format=freeze > requirements.txt

The file you have is the output of conda list, which pip cannot parse; pip expects one requirement per line in name==version form. Change your requirements.txt content as below and try pip install -r requirements.txt again.
alabaster==0.7.12
altgraph==0.17
appdirs==1.4.4
argh==0.26.2
astroid==2.4.2
async_generator==1.10
atomicwrites==1.4.0
attrs==20.3.0
auto-py-to-exe==2.7.11
autopep8==1.5.4
babel==2.9.0
backcall==0.2.0
bcrypt==3.2.0
black==19.10b0
bleach==3.2.2
bottle==0.12.19
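If the file is long, rewriting it by hand is tedious. A small script along these lines (a sketch; the function name and sample data are illustrative) can convert the conda list columns into pip's name==version format, dropping the conda-only build and channel columns:

```python
# Convert "Name Version Build [Channel]" rows from `conda list` output
# into pip-style "name==version" requirement lines.
def conda_list_to_requirements(lines):
    reqs = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and the "# Name Version Build Channel" header
        parts = line.split()
        if len(parts) >= 2:
            reqs.append(f"{parts[0]}=={parts[1]}")
    return reqs

sample = [
    "# Name                    Version   Build      Channel",
    "alabaster                 0.7.12    py36_0",
    "altgraph                  0.17      pypi_0     pypi",
]
print(conda_list_to_requirements(sample))
# → ['alabaster==0.7.12', 'altgraph==0.17']
```

Note that conda-only packages (vc, vs2015_runtime, sqlite, and the like) may not exist on PyPI, so some of the converted lines can still fail to install with pip.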

If you use Anaconda for environment management, you most likely created the requirements.txt file via:
conda list --explicit > requirements.txt
To recreate the environment with all your listed packages use:
conda env create --file requirements.txt
See the conda cheat sheet.

Related

Why does this happen when downgrading the library
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
pyppeteer 0.2.5 requires importlib-metadata<3.0.0,>=2.1.1; python_version < "3.8", but you have importlib-metadata 4.12.0 which is incompatible.
pyppeteer 0.2.5 requires urllib3<2.0.0,>=1.25.8, but you have urllib3 1.24.3 which is incompatible.
Successfully installed appdirs-1.4.4 attrs-22.1.0 black-22.1.0 click-8.1.3 importlib-metadata-4.12.0 pathspec-0.10.1 regex-2022.8.17 toml-0.10.2 typed-ast-1.5.4 typing-extensions-4.3.0 zipp-3.8.1
WARNING: You are using pip version 22.0.4; however, version 22.2.2 is available.
You should consider upgrading via the '/Users/admin/.local/share/virtualenvs/scrapers-XJakbnzz/bin/python -m pip install --upgrade pip' command.
This is a conflict of package versions. To fix it, you'll need to follow these steps:
1) pip list
2) Copy all of your packages
3) Create a new file called requirements.txt
4) Put all of your packages in there (without versions). The file should look like this:
autopep8
certifi
chardet
charset-normalizer
click
colorama
discord
Flask
itsdangerous
pip
praw
(but this has to be the list of packages you got from running pip list)
5) pip install -r requirements.txt
This lets pip resolve the version conflict itself by picking compatible versions.
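Steps 1–4 can also be scripted instead of copying package names by hand; this sketch (the function name and sample lines are illustrative) strips the version pins from an existing pip freeze output:

```python
import re

# Drop version pins from "name==1.2.3" style lines, keeping just the names.
def strip_pins(lines):
    names = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        # cut at the first version-specifier character (==, >=, <=, ~=, !=)
        name = re.split(r"[=<>~!]", line, 1)[0].strip()
        if name:
            names.append(name)
    return names

sample = ["autopep8==1.5.4", "Flask==1.1.2", "praw>=7.0"]
print(strip_pins(sample))  # → ['autopep8', 'Flask', 'praw']
```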

Tensorflow Object Detection Api M1 Macbook Conflict Error

Machine: MacBook Air M1 2020
OS: macOs BigSur 11.4
Python version of venv: Python 3.8.6
Tensorflow version: ATF Apple Tensorflow 0.1a3
Pip version: 21.2.4
I have installed Tensorflow from github using this guide.
Now, my pip list is this.
Package Version
----------------------- ---------
absl-py 0.13.0
appnope 0.1.2
astunparse 1.6.3
backcall 0.2.0
cached-property 1.5.2
cachetools 4.2.2
certifi 2021.5.30
charset-normalizer 2.0.4
cycler 0.10.0
Cython 0.29.24
debugpy 1.4.1
decorator 5.0.9
entrypoints 0.3
flatbuffers 2.0
gast 0.5.2
google-auth 1.35.0
google-auth-oauthlib 0.4.5
google-pasta 0.2.0
grpcio 1.33.2
h5py 2.10.0
idna 3.2
ipykernel 6.2.0
ipython 7.26.0
ipython-genutils 0.2.0
jedi 0.18.0
jupyter-client 7.0.1
jupyter-core 4.7.1
Keras-Preprocessing 1.1.2
kiwisolver 1.3.1
Markdown 3.3.4
matplotlib 3.4.3
matplotlib-inline 0.1.2
nest-asyncio 1.5.1
numpy 1.18.5
oauthlib 3.1.1
opt-einsum 3.3.0
packaging 21.0
parso 0.8.2
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.3.1
pip 21.2.4
prompt-toolkit 3.0.20
protobuf 3.17.3
ptyprocess 0.7.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
Pygments 2.10.0
pyparsing 2.4.7
python-dateutil 2.8.2
pyzmq 22.2.1
requests 2.26.0
requests-oauthlib 1.3.0
rsa 4.7.2
setuptools 57.4.0
six 1.16.0
tensorboard 2.6.0
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.0
tensorflow-addons 0.1a3
tensorflow-estimator 2.6.0
tensorflow-hub 0.12.0
tensorflow 0.1a3
termcolor 1.1.0
tornado 6.1
traitlets 5.0.5
typeguard 2.12.1
typing-extensions 3.10.0.0
urllib3 1.26.6
wcwidth 0.2.5
Werkzeug 2.0.1
wheel 0.37.0
wrapt 1.12.1
I want to install the Object Detection API from TensorFlow at that link.
I cloned the repo and then followed the guide (Python Package Installation).
When I execute this command
python -m pip install --use-feature=2020-resolver .
It starts to download, then prints very long errors.
At the end of the operation, it gives me this error.
Using cached scipy-1.2.3.tar.gz (23.3 MB)
Collecting pandas
Using cached pandas-1.3.2-cp38-cp38-macosx_11_0_arm64.whl
Collecting tf-models-official>=2.5.1
Using cached tf_models_official-2.6.0-py2.py3-none-any.whl (1.8 MB)
Collecting kaggle>=1.3.9
Using cached kaggle-1.5.12-py3-none-any.whl
Collecting py-cpuinfo>=3.3.0
Using cached py_cpuinfo-8.0.0-py3-none-any.whl
Requirement already satisfied: numpy>=1.15.4 in /Users/stefan/Desktop/Studio/TFOD/tf-m1/lib/python3.8/site-packages (from tf-models-official>=2.5.1->object-detection==0.1) (1.18.5)
Collecting opencv-python-headless
Using cached opencv_python_headless-4.5.3.56-cp38-cp38-macosx_11_0_arm64.whl (10.7 MB)
Collecting tf-models-official>=2.5.1
Using cached tf_models_official-2.5.1-py2.py3-none-any.whl (1.6 MB)
Collecting tensorflow-datasets
Using cached tensorflow_datasets-4.4.0-py3-none-any.whl (4.0 MB)
Collecting google-api-python-client>=1.6.7
Downloading google_api_python_client-2.18.0-py2.py3-none-any.whl (7.4 MB)
|████████████████████████████████| 7.4 MB 3.4 MB/s
Collecting oauth2client
Using cached oauth2client-4.1.3-py2.py3-none-any.whl (98 kB)
Collecting tensorflow-model-optimization>=0.4.1
Using cached tensorflow_model_optimization-0.6.0-py2.py3-none-any.whl (211 kB)
Collecting pyyaml>=5.1
Downloading PyYAML-5.4.1.tar.gz (175 kB)
|████████████████████████████████| 175 kB 31.3 MB/s
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting gin-config
Using cached gin_config-0.4.0-py2.py3-none-any.whl (46 kB)
Collecting sacrebleu
Using cached sacrebleu-2.0.0-py3-none-any.whl (90 kB)
INFO: pip is looking at multiple versions of <Python from Requires-Python> to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of object-detection to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install object-detection because these package versions have conflicting dependencies.
The conflict is caused by:
tf-models-official 2.6.0 depends on tensorflow-text>=2.5.0
tf-models-official 2.5.1 depends on tensorflow-addons
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies
I have the same issue installing the Object Detection API for TensorFlow 2 (OD API) from source on my MacBook Air M1 2020. It starts to look up/download all available dependencies with very long errors, and after several hours the process drains all available RAM and forces the laptop to reboot. I think the problem is incompatible dependencies for arm64. I tried to build/install the OD API for TensorFlow 1 instead and it worked! I successfully trained a model with TensorFlow 2 and GPU enabled.
Use the tf1 folder instead of tf2 when installing the OD API:
cd models/research
# Compile protos.
protoc object_detection/protos/*.proto --python_out=.
# Install TensorFlow Object Detection API.
cp object_detection/packages/tf1/setup.py .
python -m pip install --use-feature=2020-resolver .
or just use this guide for installing OD API: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf1.md
By the way,
here is a working Tensorflow setup on Apple M1 silicon with the latest TensorFlow versions and Metal GPU acceleration: https://github.com/ctrahey/m1-tensorflow-config
the best guide for object detection: https://neptune.ai/blog/how-to-train-your-own-object-detector-using-tensorflow-object-detection-api
I successfully installed it with:
python -m pip install --force --no-dependencies .
My list of commands for correctly installing TF 2.0 on M1:
conda create --name=tf-m1
conda activate tf-m1
conda install python=3.8.6 -y
sh Desktop/PATH TO GITHUB DIR OF TENSORFLOW MAC(i used 0.1a3)/install_venv.sh /Users/stefan/miniforge3/envs/tf-m1
python -m pip install --upgrade pip
pip install ipykernel jupyter
python -m ipykernel install --user --name=tensorflow-m1.0
Tensorflow Test : ok (import tensorflow as tf; print(tf.__version__))
NOW USE CONDA INSTALL
conda install -c conda-forge matplotlib -y
conda install -c conda-forge scikit-learn -y
conda install -c conda-forge opencv -y
conda install -c conda-forge pandas -y
Tensorflow Test : ok
cd Desktop/PATH/
mkdir -p Tensorflow/models
git clone https://github.com/tensorflow/models Tensorflow/models
cd Tensorflow/models/research && protoc object_detection/protos/*.proto --python_out=. && cp object_detection/packages/tf2/setup.py . && python -m pip install --force --no-dependencies .
The Object Detection API has some dependencies I had to install.
(pyarrow and apache-beam are not supported at the moment, but I don't think they are essential for the general working of the API)
pip install tf-slim
pip install pycocotools
pip install lxml
pip install lvis
pip install contextlib2
pip install --no-dependencies tf-models-official
pip install avro-python3
pip install pyyaml
pip install gin-config
I don't know if it is the perfect installation of TensorFlow and the TensorFlow Object Detection API, but at the moment this worked for me.
Things should work better if you upgrade to macOS Monterey and install conda from Miniforge along with the packages listed below.
As of Oct. 25, 2021 macOS 12 Monterey is generally available.
Upgrade your machine to Monterey.
If you have conda installed, uninstall it.
Then follow the instructions from Apple here.
Cleaned up below:
Download and install Conda from Miniforge:
chmod +x ~/Downloads/Miniforge3-MacOSX-arm64.sh
sh ~/Downloads/Miniforge3-MacOSX-arm64.sh
source ~/miniforge3/bin/activate
In a conda environment, install the TensorFlow dependencies, base TensorFlow, and TensorFlow metal:
conda install -c apple tensorflow-deps
pip install tensorflow-macos
pip install tensorflow-metal
You should be good to go.

How should I uninstall all packages I've installed via pip? My virtual environments were working fine until I started installing API packages

I'm working on a 16in MacBook Pro and only rely on basic python3 with pip. How do I go about cleaning things up so that I'm left with only the necessities and can build up my environments again quickly? At this point I only have a Django and a Flask env set up and don't mind recreating them.
My virtual environments previously had very few things in them, and my current projects don't require too much. But at the end of a long day I started exploring APIs and thought I had activated a new environment, created moments before, just to contain any new packages. A few days later, when attempting to update some models from the Python shell, I was faced with an error asking me to check my Django project settings; after much troubleshooting we discovered it was because I hadn't activated the environment when installing the packages.
So far I plan on just pip uninstalling one by one, but I don't want to remove the wrong thing and be left with much work to undo what broke.
asgiref 3.2.7
astroid 2.4.1
autopep8 1.5.3
bcrypt 3.1.7
cffi 1.14.0
click 7.1.2
cssselect 1.1.0
d 0.2.2
Django 3.0.8
Flask 1.1.2
isort 4.3.21
itsdangerous 1.1.0
Jinja2 2.11.2
lazy-object-proxy 1.4.3
lxml 4.5.1
Markdown 3.2.2
MarkupSafe 1.1.1
mccabe 0.6.1
pip 20.1.1
pycodestyle 2.6.0
pycparser 2.20
Pygments 2.6.1
pylint 2.5.2
pyquery 1.4.1
pytz 2020.1
setuptools 41.2.0
six 1.15.0
sqlparse 0.3.1
toml 0.10.1
Werkzeug 1.0.1
wrapt 1.12.1
Follow the steps below.
Make a list of the packages you want to remove and save it in a text file.
Then use the command below:
pip uninstall -r file_name.txt
If you want to remove all the packages except the built-ins, run the following in your environment:
pip freeze > dependencies.txt
Then uninstall using:
pip uninstall -r dependencies.txt
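Before feeding the freeze file back into pip uninstall, you may want to filter out packages you intend to keep. A small sketch (the KEEP set and sample lines are illustrative, adjust them to your setup):

```python
# Packages we never want to uninstall (illustrative choice).
KEEP = {"pip", "setuptools", "wheel"}

# Return only the freeze-file lines that are safe to uninstall.
def removable(requirements):
    out = []
    for line in requirements:
        line = line.strip()
        name = line.split("==")[0]
        if name and name.lower() not in KEEP:
            out.append(line)
    return out

frozen = ["asgiref==3.2.7", "wheel==0.37.0", "Django==3.0.8"]
print(removable(frozen))  # → ['asgiref==3.2.7', 'Django==3.0.8']
```

Write the filtered list to a file and pass that file to pip uninstall -r instead of the raw freeze output.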
You could use a tool like pipdeptree or deptree to help you figure out which package is a dependency of which, and thus decide which ones you want to remove and which ones to keep.
But it might be easier to start with a fresh virtual environment and install only the things you need in a clean environment.

Can't install any pip package within my docker environment, since it won't be recognized

I am running a Cookiecutter Django project in a Docker environment and I would like to add new packages via pip. Specifically, I want to add djangorestframework-jwt.
When I do:
docker-compose -f local.yml run --rm django pip install
it seems to work perfectly, because I get:
Successfully installed PyJWT-1.7.1 djangorestframework-jwt-1.11.0
Now the problem is that it doesn't actually install it. The package doesn't appear when I run pip freeze, nor in pip list.
Then I tried to put it into my requirements.txt file and run it with:
docker-compose -f local.yml run --rm django pip install -r requirements/base.txt
Same result. It says that it was successfully installed, but it is not. I thought it might be a problem with my Django version and the package, but the same happens when I try to update pip. It says it updated, but when I run pip install --upgrade pip I again get:
You should consider upgrading via the 'pip install --upgrade pip' command.
I'm running out of options.
My requirements:
-r ./base.txt
Werkzeug==0.14.1 # https://github.com/pallets/werkzeug
ipdb==0.11 # https://github.com/gotcha/ipdb
Sphinx==1.7.5 # https://github.com/sphinx-doc/sphinx
psycopg2==2.7.4 --no-binary psycopg2 # https://github.com/psycopg/psycopg2
# Testing
# ------------------------------------------------------------------------------
pytest==3.6.3 # https://github.com/pytest-dev/pytest
pytest-sugar==0.9.1 # https://github.com/Frozenball/pytest-sugar
# Code quality
# ------------------------------------------------------------------------------
flake8==3.5.0 # https://github.com/PyCQA/flake8
coverage==4.5.1 # https://github.com/nedbat/coveragepy
# Django
# ------------------------------------------------------------------------------
factory-boy==2.11.1 # https://github.com/FactoryBoy/factory_boy
django-debug-toolbar==1.9.1 # https://github.com/jazzband/django-debug-toolbar
django-extensions==2.0.7 # https://github.com/django-extensions/django-extensions
django-coverage-plugin==1.5.0 # https://github.com/nedbat/django_coverage_plugin
pytest-django==3.3.2 # https://github.com/pytest-dev/pytest-django
djangorestframework-jwt==1.11.0 # https://github.com/GetBlimp/django-rest-framework-jwt
Output of pip list:
Package Version
------------------------ --------
alabaster 0.7.12
argon2-cffi 18.1.0
atomicwrites 1.3.0
attrs 19.1.0
Babel 2.6.0
backcall 0.1.0
certifi 2019.3.9
cffi 1.12.2
chardet 3.0.4
coreapi 2.3.3
coreschema 0.0.4
coverage 4.5.1
decorator 4.4.0
defusedxml 0.5.0
Django 2.0.7
django-allauth 0.36.0
django-coverage-plugin 1.5.0
django-crispy-forms 1.7.2
django-debug-toolbar 1.9.1
django-environ 0.4.5
django-extensions 2.0.7
django-model-utils 3.1.2
django-redis 4.9.0
django-widget-tweaks 1.4.3
djangorestframework 3.8.2
docutils 0.14
factory-boy 2.11.1
Faker 1.0.4
flake8 3.5.0
idna 2.8
imagesize 1.1.0
ipdb 0.11
ipython 7.4.0
ipython-genutils 0.2.0
itypes 1.1.0
jedi 0.13.3
Jinja2 2.10
MarkupSafe 1.1.1
mccabe 0.6.1
more-itertools 6.0.0
oauthlib 3.0.1
packaging 19.0
parso 0.3.4
pexpect 4.6.0
pickleshare 0.7.5
Pillow 5.2.0
pip 19.0.3
pluggy 0.6.0
prompt-toolkit 2.0.9
psycopg2 2.7.4
ptyprocess 0.6.0
py 1.8.0
pycodestyle 2.3.1
pycparser 2.19
pyflakes 1.6.0
Pygments 2.3.1
pyparsing 2.3.1
pytest 3.6.3
pytest-django 3.3.2
pytest-sugar 0.9.1
python-dateutil 2.8.0
python-slugify 1.2.5
python3-openid 3.1.0
pytz 2018.5
redis 3.2.1
requests 2.21.0
requests-oauthlib 1.2.0
setuptools 40.8.0
six 1.12.0
snowballstemmer 1.2.1
Sphinx 1.7.5
sphinxcontrib-websupport 1.1.0
sqlparse 0.3.0
termcolor 1.1.0
text-unidecode 1.2
traitlets 4.3.2
Unidecode 1.0.23
uritemplate 3.0.0
urllib3 1.24.1
wcwidth 0.1.7
Werkzeug 0.14.1
wheel 0.33.1
Any help is highly appreciated! Thanks...
docker-compose run starts a new container and executes the command in it. When used with the --rm flag, the container gets removed after the command completes.
What happens is that a new container gets created and the packages are installed (or pip is upgraded) inside this container. Once the command completes, the container is removed.
If later on you run something like docker-compose -f local.yml run --rm django pip list, a brand new container gets created and pip list is executed inside it, showing no packages from the previous run, since they were installed in a different container that has already been removed.
A better way would be to create a Docker image that includes your application and install the pip packages during docker build. You can check a sample in this question.
This way, any time you start a container from your image it will have all the packages inside.
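As a minimal sketch of that approach (the base image and paths are assumptions, not taken from your project; Cookiecutter Django ships its own Dockerfiles you can edit instead), the key point is that the pip install happens in a RUN step at build time:

```dockerfile
# Sketch only: adjust base image and paths to match your project layout.
FROM python:3.6

WORKDIR /app

# Copy the requirements first so this layer is cached until they change.
COPY requirements/ requirements/
RUN pip install -r requirements/base.txt

# Then copy the rest of the application.
COPY . .
```

After changing the requirements, rebuild the image (for example with docker-compose -f local.yml build django) so new containers pick up the packages.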

Capture snapshot of current python environment and recreate on another machine

I have an environment created using miniconda with python 3.6.8, called basepy_3_6_8.
I want to save the environment snapshot to a file and then recreate it later on another machine:
There are different commands to capture the environment snapshot, with slightly different outputs. Which of these can I use to guarantee that the exact environment used by the user is recreated in the target?
I was hoping pip freeze > requirements.txt and pip install -r requirements.txt would work independent of the source environment, but I noticed that pip freeze from within a conda environment does not capture the python version.
Here is the code to create the conda environment, and output of different commands:
$ conda create -n myenv python=3.6.8
$ conda activate myenv
(myenv)$ pip freeze
astroid==2.1.0
autopep8==1.4.3
certifi==2018.11.29
colorama==0.4.1
isort==4.3.4
lazy-object-proxy==1.3.1
mccabe==0.6.1
pycodestyle==2.4.0
pylint==2.2.2
six==1.12.0
typed-ast==1.1.1
wincertstore==0.2
wrapt==1.11.0
(myenv)$ pip list
Package Version
----------------- ----------
astroid 2.1.0
autopep8 1.4.3
certifi 2018.11.29
colorama 0.4.1
isort 4.3.4
lazy-object-proxy 1.3.1
mccabe 0.6.1
pip 18.1
pycodestyle 2.4.0
pylint 2.2.2
setuptools 40.6.3
six 1.12.0
typed-ast 1.1.1
wheel 0.32.3
wincertstore 0.2
wrapt 1.11.0
(myenv)$ conda list
# packages in environment at C:\Users\alias\AppData\Local\Continuum\miniconda3\envs\myenv:
#
# Name Version Build Channel
certifi 2018.11.29 py36_0
pip 18.1 py36_0
python 3.6.8 h9f7ef89_0
setuptools 40.6.3 py36_0
sqlite 3.26.0 he774522_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.15.26706 h3a45250_0
wheel 0.32.3 py36_0
wincertstore 0.2 py36h7fe50ca_0
(myenv)$ conda list --export
# This file may be used to create an environment using:
# $ conda create --name <env> --file <this file>
# platform: win-64
certifi=2018.11.29=py36_0
pip=18.1=py36_0
python=3.6.8=h9f7ef89_0
setuptools=40.6.3=py36_0
sqlite=3.26.0=he774522_0
vc=14.1=h0510ff6_4
vs2015_runtime=14.15.26706=h3a45250_0
wheel=0.32.3=py36_0
wincertstore=0.2=py36h7fe50ca_0
I am eventually interested in a general tool that can capture the current environment of a specified type (conda, virtualenv, venv, global python environment) so as to install it uniformly on another machine. What is the best approach for this?
I've never used conda, but I'd try to use two different tools to manage the python version and your project dependencies.
To install a specific python version, I'd use pyenv: https://github.com/pyenv/pyenv.
pyenv also has a plugin to manage virtualenvs (https://github.com/pyenv/pyenv-virtualenv) that should support Anaconda and Miniconda: https://github.com/pyenv/pyenv-virtualenv#anaconda-and-miniconda
To manage your dependencies (packages you install in your virtual env), you have a few alternatives:
Pip freeze: it doesn't automatically guarantee reproducibility though, because it doesn't have a lock file to pinpoint the exact dependency tree
Poetry: https://github.com/sdispater/poetry (supports a lock file)
Pipenv: https://github.com/pypa/pipenv (supports a lock file)
Hope this is helpful.
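Regarding the observation that pip freeze omits the interpreter version: one workaround (a sketch; recording the version as a comment line is just a convention, pip does not read it) is to capture it yourself next to the requirements file:

```python
import sys

# Capture the running interpreter version in a pinnable form,
# e.g. to prepend as a comment to requirements.txt or store in a
# separate file for the target machine to check against.
version = "{}.{}.{}".format(*sys.version_info[:3])
snapshot_header = f"# python=={version}"
print(snapshot_header)
```

On the target machine you can then create the environment with that exact Python (via conda, pyenv, or similar) before running pip install -r requirements.txt.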
