How to install packages to a specific directory using setuptools - python

setup.py
from setuptools import setup

setup(
    name="Project",
    version="1.0",
    packages=[
        'Project',
        'Project.project',
        'Project.LOG',
        'Project.reporting',
        'Project.templates',
    ],
    install_requires=[
        'django-grappelli==2.3.8',
        'pycairo==1.10.0',
        'django-chart-tools==0.2.1',
        'django-admin-tools==0.4.0',
    ],
    package_data={
        '': ['*.html', '*.pyd', '*.txt', '*.gif', '*.png', '*.jpeg', '*.jpg',
             '*.css', '*.js', '*.py', '*.html~', '*.sh', '*.wsgi'],
    },
    # metadata for upload to PyPI
    author="Me",
    author_email="me@example.com",
    description="This is an Example Package",
)
Now I want all of the packages to be installed to dev/workspace on Windows and to /var/www on Ubuntu, and I want everything in install_requires to be installed to python/Lib/site-packages.
How can I do this?

I think you really want to have a look at virtualenv. It lets you create your own Python environment into which your code and its dependencies get installed - wherever you want, and totally independent of your existing Python installation. I don't see a good reason why you would want to install the dependencies into your existing Python installation.
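A rough sketch of that workflow (the dev/workspace path comes from the question; the env name is my own):
pip install virtualenv
virtualenv dev/workspace/env
dev/workspace/env/bin/pip install .  # installs the package plus its install_requires into the env
On Windows the scripts live under Scripts rather than bin, so the last step becomes dev\workspace\env\Scripts\pip install .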

Related

PyCharm cannot find reference `cloud` in "__init__.pyi" for google.cloud, but code does execute

I am using poetry with pyenv to manage dependencies. My pyproject.toml looks as follows:
[tool.poetry]
name = "hello-world"
version = "0.1.0"
description = "None"
authors = ["Hello <foo#bar.com>"]
readme = "README.md"
keywords = []
[tool.poetry.dependencies]
python = ">=3.9,<3.11"
google-cloud = "^0.34.0"
google-cloud-core = "^2.3.2"
google-cloud-bigquery-datatransfer = "^3.7.1"
google-cloud-bigquery = "^3.3.2"
google-cloud-firestore = "^2.5.2"
[[tool.poetry.source]]
name = "ngt-pypi"
url = "link/to/private/package/abc-python/simple/"
default = false
secondary = true
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
Assuming pyenv is installed (and using version 3.9.6), I install the dependencies by running:
poetry config virtualenvs.in-project true
poetry install
After this, I confirm that my interpreter has the latest versions of the google cloud packages installed.
Nevertheless, when I write code and import bigquery:
from google.cloud import bigquery
I see in the PyCharm editor that the reference cannot be resolved. The code does execute, however, and there are no errors.
What can be done to resolve this issue?
For PyCharm and IntelliJ, you can do the following:
Open the menu File/Project Structure
Click on SDKs and then the plus button
Click on Add Python SDK
Click on Poetry Environment
It will detect your current Poetry env and create the SDK in PyCharm
Then, in the Project Settings section, select the SDK created previously
I usually use Pipenv instead of Poetry, but the principle is the same with PyCharm.
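If the environment is not detected automatically, Poetry can print the path of the in-project virtualenv, and you can point the new SDK at that interpreter manually:
poetry env info --path
This prints something like /path/to/hello-world/.venv (in-project because of the virtualenvs.in-project setting above); select the python binary inside it as an existing interpreter.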

Unable to install locally built python package

I recently noticed that I am unable to install my own Python packages. I was getting an error that indicated that a package containing Python modules was invalid. So, I updated my setup.py and removed some elements; this is what I have now:
from setuptools import setup

setup(
    name='project',
    version='0.3.0',
    packages=['project'],
    license='GPL',
    #zip_safe=False,
    #include_package_data=True,
    #package_data={'package': ['README.txt', '*.py']},
    install_requires=[
        'PyYAML >= 3.11',
        'logger >= 0.2.0',
    ],
    entry_points={
        'console_scripts': ['project = project:main']
    },
)
I removed some elements and called the project project. Essentially, within project, I had a package, libraries, containing some Python modules. Before removing these lines:
#zip_safe=False,
#include_package_data=True,
#package_data = { 'package': [ 'README.txt', '*.py' ] },
... it was not working at all.
Oddly enough, this setup.py was working as far as I could tell up until a month ago. That said, after commenting those items out and running python setup.py build, I no longer get the error about the package being invalid; but at the same time, I see that nothing gets installed when running pip install dist/project-0.0.1.tar.gz. Inside the file built by python setup.py sdist, I do see all the files that I would expect to see. They just don't get installed, so I'm effectively missing all of the packages underneath the root folder (which is everything except __init__).
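One way to inspect what actually went into the archive (filename as above):
tar tzf dist/project-0.0.1.tar.gz  # lists every file packed into the sdist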
What am I missing here?
EDIT:
The solution was:
packages=find_packages(),
The hackish solution for me was to do this:
packages=['project', 'project/libraries', 'project/system', 'project/services'],
For whatever reason, listing only the top-level package was no longer picking up its subpackages.
As soon as I did that, voila, it worked. I'll probably circle back to this later as I'm curious what changed.
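For reference, a minimal sketch of the corrected setup.py (names taken from the question):
from setuptools import setup, find_packages

setup(
    name='project',
    version='0.3.0',
    # discovers 'project' and all subpackages (project.libraries,
    # project.system, project.services) automatically
    packages=find_packages(),
    license='GPL',
    install_requires=[
        'PyYAML >= 3.11',
        'logger >= 0.2.0',
    ],
    entry_points={
        'console_scripts': ['project = project:main']
    },
)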

Building and distributing a python module using rpm

I am trying to build and distribute an rpm package of a python module for CentOS. I have followed these steps:
created a virtualenv and installed the requirements
added a setup.py with install_requires to the module
then built the package using python2.7 from the virtualenv:
../env/bin/python2.7 setup.py bdist_rpm
Now I have src, noarch, and tar.gz files in the 'dist' folder:
foo-0.1-1.noarch.rpm, foo-0.1-1.src.rpm, foo-0.1.tar.gz
I tried to install the src rpm using 'sudo yum install foo-0.1-1.src.rpm' and got an error about a wrong architecture.
Then I tried to install the noarch package, 'sudo yum install foo-0.1-1.noarch.rpm', and it installed smoothly.
But running the script gave an import error; I expected the missing module to be downloaded automatically.
The last issue is that I am using a third-party library which is not on pip.
So I want to set the whole thing up using a virtualenv with the required modules, so that after installing the rpm the user can run the script directly instead of installing the third-party libs separately and explicitly.
Some of the above steps may sound wrong, as I am new to this stuff.
Following is the code in setup.py:
from setuptools import setup, find_packages

setup(
    name="foo",
    version="0.1",
    packages=find_packages(),
    scripts=['foo/bar.py', ],
    # Project uses reStructuredText, so ensure that the docutils get
    # installed or upgraded on the target machine
    install_requires=[
        'PyYAML', 'pyOpenSSL', 'pycrypto',
        'privatelib1', 'privatelib2', 'zope.interface',
    ],
    package_data={
        # If any package contains *.txt or *.rst files, include them:
        '': ['*.txt', '*.rst'],
        # And include any *.msg files found in the 'foo' package, too:
        'foo': ['*.msg'],
    },
    # metadata for upload to PyPI
    author="foo bar",
    description="foo bar",
    license="",
    keywords="foo bar",
    # could also include long_description, download_url, classifiers, etc.
)
I am also using this shebang in the script:
#!/usr/bin/env python2.7
Note:
I have multiple python setups, 2.6 and 2.7.
By default the 'python' command gives 2.6,
while the command 'python2.7' gives python2.7.
Output of 'rpm -qp foo-0.1-1.noarch.rpm --requires':
/usr/bin/python
python(abi) = 2.6
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
When I install the package, the script's shebang line (the script is now '/usr/bin/bar.py') gets changed to '/usr/bin/python'. But I explicitly want the script to run on python2.7.
Thanks in advance
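A sketch of one possible fix, untested: distutils' bdist_rpm command accepts a --python option that hard-codes an interpreter path into the generated .spec file, so building with
../env/bin/python2.7 setup.py bdist_rpm --python /usr/bin/python2.7
should keep the installed script pointed at python2.7, assuming that binary exists on the target machine.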

Changing console_script entry point interpreter for packaging

I'm packaging some python packages using a well-known third-party packaging system, and I'm encountering an issue with the way entry points are created.
When I install an entry point on my machine, the generated script contains a shebang pointing at whatever python interpreter installed it, like so:
in /home/me/development/test/setup.py
from setuptools import setup

setup(
    entry_points={
        "console_scripts": [
            'some-entry-point = test:main',
        ]
    }
)
in /home/me/.virtualenvs/test/bin/some-entry-point:
#!/home/me/.virtualenvs/test/bin/python
# EASY-INSTALL-ENTRY-SCRIPT: 'test==1.0.0','console_scripts','some-entry-point'
__requires__ = 'test==1.0.0'
import sys
from pkg_resources import load_entry_point
sys.exit(
    load_entry_point('test==1.0.0', 'console_scripts', 'some-entry-point')()
)
As you can see, the entry point boilerplate contains a hard-coded path to the python interpreter that's in the virtual environment that I'm using to create my third party package.
Installing this entry point using my third-party packaging system puts the script on the target machine, but with that hard-coded reference to a python interpreter which doesn't exist there, the user must run python /path/to/some-entry-point.
The shebang makes this pretty unportable (portability isn't a design goal of virtualenv, for sure; but I just need to make it a little more portable here).
I'd rather not resort to crazed find/xargs/sed commands (although that's my fallback).
Is there some way that I can change the interpreter path after the shebang using setuptools flags or configs?
You can customize the console_scripts' shebang line by setting sys.executable (learned this from a Debian bug report). That is to say...
import sys
from setuptools import setup

sys.executable = '/bin/custom_python'

setup(
    entry_points={
        'console_scripts': [
            # ... etc ...
        ]
    }
)
Better, though, would be to include the 'executable' option when building...
setup(
    entry_points={
        'console_scripts': [
            # ... etc ...
        ]
    },
    options={
        'build_scripts': {
            'executable': '/bin/custom_python',
        },
    }
)
For future reference, if someone wants to do this at install time without modifying the setup.py, it's possible to pass the interpreter path to setup.py build via pip with:
$ ./venv/bin/pip install --global-option=build \
--global-option='--executable=/bin/custom_python' .
...
$ head -1 ./venv/bin/some-entry-point
#!/bin/custom_python
Simply change the shebang of your setup.py to match the python you want your entry points to use:
#!/bin/custom_python
(I tried @damian's answer but it did not work for me; maybe the setuptools version on Debian Jessie is too old.)

Optional dependencies in distutils / pip

When installing my python package, I want to be able to tell the user about various optional dependencies. Ideally I would also like to print out a message about these optional requirements and what each of them does.
I haven't seen anything yet in the docs of either pip or distutils. Do these tools support optional dependencies?
These are called extras; here is how to use them in your setup.py, setup.cfg, or pyproject.toml.
The base support is in pkg_resources. You need to enable distribute in your setup.py. pip will also understand them:
pip install 'package[extras]'
Yes, as stated by @Tobu and explained here. In your setup.py file you can add a line like this:
extras_require={
    'full': ['matplotlib', 'tensorflow', 'numpy', 'tikzplotlib']
}
I have an example of this line here.
Now you can install either the basic/vanilla package, with pip install package_name, or the package with all the optional dependencies, with pip install package_name[full].
Here package_name is the name of your package, and full works because we put "full" as the key in the extras_require dictionary; use whatever name you chose there.
If someone is interested in how to code a library that can work with or without an optional package, I recommend this answer.
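A minimal sketch of that pattern (matplotlib stands in for any optional dependency; the flag name is my own):
# Try the optional import once at module load; fall back gracefully.
try:
    import matplotlib.pyplot as plt
    HAVE_MATPLOTLIB = True
except ImportError:
    HAVE_MATPLOTLIB = False

def plot_results(data):
    # Only this feature needs the extra; everything else keeps working.
    if not HAVE_MATPLOTLIB:
        raise RuntimeError(
            "plotting requires the 'full' extra: pip install package_name[full]")
    plt.plot(data)
    plt.show()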
Since PEP 621, this information is better placed in pyproject.toml rather than setup.py. Here's the relevant specification from PEP 621, and here's an example snippet from a pyproject.toml (credit to @GwynBleidD):
[project.optional-dependencies]
test = [
    "pytest < 5.0.0",
    "pytest-cov[all]"
]
lint = [
    "black",
    "flake8"
]
ci = [
    "pytest < 5.0.0",
    "pytest-cov[all]",
    "black",
    "flake8"
]
A more complete example can be found in the PEP.
