I am trying to build my own Python package (installable by pip) and upload it using twine. This all goes well right up until the point where I try to pip install my actual package (so after uploading to PyPI).
So I first run:
python3 setup.py sdist bdist_wheel
where the install_requires list in my setup.py looks like this:
install_requires=[
    'jupyter_kernel_gateway==2.4.0',
    'pandas==1.0.2',
    'numpy==1.18.1',
    'azure-storage-blob==2.0.1',
    'azure-datalake-store==0.0.48',
    'psycopg2-binary==2.8.4',
    'xlsxwriter==1.2.6',
    'SQLAlchemy==1.3.12',
    'geoalchemy2==0.6.3',
    'tabulate==0.8.2',
    'pyproj==1.9.6',
    'geopandas==0.4.0',
    'contextily==0.99.0',
    'matplotlib==3.0.2',
    'humanize==0.5.1',
    'ujson==1.35',
    'singleton-decorator==1.0.0',
    'dataclasses==0.6',
    'xlrd==1.2.0'],
My understanding is that pip would install the packages listed in install_requires automatically when installing my own package.
After this I run
python3 -m twine upload --repository testpypi dist/*
to actually upload my package to TestPyPI. However, when pip installing my package, I get errors saying that, for a lot of the listed requirements, no satisfying version can be found. E.g.: ERROR: Could not find a version that satisfies the requirement psycopg2-binary==2.8.4
When I manually install these packages (e.g. pip install psycopg2-binary==2.8.4), they do get installed.
Is there any way to make the pip install of my package actually install the install_requires requirement list successfully?
You didn't show how you're pip install-ing your package, but I'm guessing you're using something like:
pip install your_project --index-url https://test.pypi.org/simple
The issue is that TestPyPI doesn't contain copies of your dependencies that exist on PyPI. For example:
Exists: https://pypi.org/project/psycopg2-binary/2.8.4/
Does not exist: https://test.pypi.org/project/psycopg2-binary/2.8.4/
You can instead configure pip to treat PyPI as the main index and fall back on TestPyPI only for packages that are missing there, by specifying --extra-index-url:
pip install your_project --extra-index-url https://test.pypi.org/simple
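If you install from TestPyPI regularly, the same fallback can be made permanent instead of passed on every command line. A minimal sketch, assuming the per-user pip.conf on Linux lives at ~/.config/pip/pip.conf (the location differs on Windows and macOS):
[global]
extra-index-url = https://test.pypi.org/simple
Or, equivalently, put the option at the top of a requirements.txt:
--extra-index-url https://test.pypi.org/simple
your_project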
I built a pypiserver on my computer and uploaded a project to it. Then I tried to install the project on another computer, but I had uploaded only the source of the project, without its dependencies.
So when I install the project, pip tries to install all the dependencies from my own server as well,
and fails with: Error: Not Found for url: http://xxx.xxx.xxx.xxx/simple/gunicorn/
So, is there some way to point pip at my own pypiserver for the project and at a different PyPI source for the dependencies when I use pip install -i my-pypi-server?
You can specify --extra-index-url when you run pip install so that the project dependencies can be resolved outside your local repository:
pip install -i my-pypi-server --extra-index-url https://pypi.douban.com/simple <library>
I have a Python package being built, which also defines two optional extras via extras_require:
name='mypackage',
extras_require={
    'option_one': ['dep1'],
    'option_two': ['dep2']
}
I only have access to the tar.gz built package which means I cannot simply do:
pip install mypackage[option_two]
Previously, I was installing this directly from the tar.gz:
pip install path/to/mypackage.tar.gz
However, this no longer allows me to specify the extra_require like:
pip install path/to/mypackage.tar.gz[option_two] # this is wrong
I could expand the package and do a manual install from the directory but is there a way to more directly install from the tar.gz itself?
From the pip changelog:
7.0.0 (2015-05-21)
Allowing using extras when installing from a file path without requiring the use of an editable (PR #2785).
Some Linux distros bundle very old versions of pip when using the system packages for virtualenv or venv. Update pip after creating your env.
pip install -U pip
pip install package.tar.gz[name]
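With a new enough pip, the extras syntax from the question then works when attached directly to the file path; quoting the argument keeps shells such as zsh from treating the square brackets as a glob pattern:
pip install "path/to/mypackage.tar.gz[option_two]"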
I have some Django packages, like django-oscar, that I need to install with pip and then edit the code of and revise.
I tried installing it through setup.py develop so that it makes an .egg-info. Then I learned that pip has no feature for installing packages through an .egg-info.
I also tried to install the package from a local directory using -e /path/to/package, but pip wouldn't let me install from a directory. It gave me the message: --editable=src/django-oscar-master/oscar/ should be formatted with svn+URL, git+URL, hg+URL or bzr+URL
Then I tried installing through pip install django-oscar --no-index --find-links=file://src/django-oscar-master/ and similar commands. It always gives me: Could not find any downloads that satisfy the requirement django-oscar
How do I install a package that is not in the site-packages of the virtualenv, and what command do I put in requirements.txt so that it installs this package from a local dir?
This isn't really what pip was designed to do. You should push your version of django-oscar to GitHub, then reference it in your pip requirements.txt.
Or, if you don't want it hosted remotely, you might as well just include it in your project directory, as you would a Django app you are writing yourself.
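For the requirements.txt part of the question, both options can be written down directly. A minimal sketch, where the GitHub URL is a hypothetical fork and the local path comes from the question (the directory must contain a setup.py, and the plain-directory form of -e works on reasonably recent pip):
# editable install from a local directory
-e src/django-oscar-master/
# or an editable install from a hypothetical fork on GitHub
-e git+https://github.com/yourname/django-oscar.git#egg=django-oscar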
If I want to use the pip command to download a package (and its dependencies), but keep all of the zipped files that get downloaded (say, django-socialregistration.tar.gz) - is there a way to do that?
I've tried various command-line options, but it always seems to unpack and delete the zipfile - or it gets the zipfile, but only for the original package, not the dependencies.
pip install --download is deprecated. Starting from version 8.0.0, you should use the pip download command:
pip download <package-name>
The --download-cache option should do what you want:
pip install --download-cache="/pth/to/downloaded/files" package
However, when I tested this, the main package downloaded, saved and installed OK, but the dependencies were saved with their full URL path as the name - a bit annoying, but all the tar.gz files were there.
The --download option downloads the main package and its dependencies and does not install any of them. (Note that prior to version 1.1 the --download option did not download dependencies.)
pip install package --download="/pth/to/downloaded/files"
The pip documentation outlines using --download for fast & local installs.
I always do this to download the packages:
pip install --download /path/to/download/dir packagename
OR
pip install --download=/path/to/packages/downloaded -r requirements.txt
And when I want to install all of those libraries I just downloaded, I do this:
pip install --no-index --find-links="/path/to/downloaded/dependencies" packagename
OR
pip install --no-index --find-links="/path/to/downloaded/packages" -r requirements.txt
Update
Also, to capture all the packages installed on one system so they can be installed on another, export them to a requirements.txt:
pip freeze > requirements.txt
Then the requirements.txt can be used as above for downloading, or to install the packages directly from it:
pip install -r requirements.txt
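Since the --download flag used above is deprecated, the same two-machine workflow with a modern pip looks like this (the paths are placeholders):
# on the machine with internet access
pip freeze > requirements.txt
pip download -r requirements.txt -d /path/to/downloaded/packages
# on the target machine, after copying requirements.txt and the folder over
pip install --no-index --find-links=/path/to/downloaded/packages -r requirements.txt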
REFERENCE: pip installer
pip wheel is another option you should consider:
pip wheel mypackage -w .\outputdir
It will download packages and their dependencies to a directory (current working directory by default), but it performs the additional step of converting any source packages to wheels.
It conveniently supports requirements files:
pip wheel -r requirements.txt -w .\outputdir
Add the --no-deps argument if you only want the specifically requested packages:
pip wheel mypackage -w .\outputdir --no-deps
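Note that older versions of pip need the wheel package in the environment before pip wheel can build anything; if you get an error about missing wheel support, install it first:
pip install wheel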
Use pip download <package1 package2 package n> to download all the packages including dependencies
Use pip install --no-index --find-links . <package1 package2 package n> to install all the packages including dependencies.
The install step gets all the files from the current working directory and does not download anything.
In version 7.1.2 pip downloads the wheel of a package (if available) with the following:
pip install package -d /path/to/downloaded/file
The following downloads a source distribution:
pip install package -d /path/to/downloaded/file --no-binary :all:
These download the dependencies as well, if pip is aware of them (e.g., if pip show package lists them).
Update
As noted by Anton Khodak, the pip download command is preferred since version 8. For the above examples this means that /path/to/downloaded/file still needs to be given with the -d option, i.e. replacing install with download works.
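Concretely, the two examples above then become:
pip download package -d /path/to/downloaded/file
pip download package -d /path/to/downloaded/file --no-binary :all: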
Installing Python packages offline
For Windows users:
To download into a folder:
Open your cmd and follow this:
cd <*the file-path where you want to save it*>
pip download <*package name*>
The package and its dependencies will be downloaded into the current working directory.
To install from the current working directory:
Set the folder where you downloaded the packages as the cwd, then follow this:
pip install <*the package name which is downloaded as .whl*> --no-index --find-links <*the file location where the files are downloaded*>
This will search for dependencies in that location.
All the answers mentioned in this thread assume that the packages will be downloaded on the same OS configuration as the target OS where they are to be installed.
In my personal experience, I was using Windows as my work machine and had to download packages for a Linux environment, and I have seen people do the reverse as well. After some extensive googling, I found sodim.dev.
All I had to do was upload the requirements.txt file and select the environment configuration, like OS and Python version, and it gives out a CSV with the download URL, source code URL, etc.
I guess in the backend this app spins up the requested OS VM, installs that particular Python version and then generates the report, because it does take about 15-20 minutes for 30-50 packages.
P.S.: I work in an offline environment where security is a very high concern, and downloading packages is not that frequent. We whitelist source code and download URLs for each individual request and then, after running some appsec tools, we approve/reject the source code to be downloaded.
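As a partial alternative that needs neither a second machine nor an external service, newer versions of pip can download wheels for a foreign platform directly, provided every requirement is available as a binary wheel. A sketch for a Linux/CPython 3.9 target (the platform, version and implementation tags are examples; adjust them for your target):
pip download -r requirements.txt -d ./packages --only-binary=:all: --platform manylinux2014_x86_64 --python-version 39 --implementation cp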
On RHEL I would prefer:
pip download package==version --no-deps --no-binary=:all: