setup.py install etc files in user directory - python

I create a python package. The configuration should be placed in /etc/myapp if installed globally or in ~/.myapp if installed for user only (pip install myapp --user).
I can do the first point with this in the setup.py file:
data_files = [("/etc/myapp", ['application.properties'])]
But how can I place this file in ~/.myapp if installed for user only?
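One workaround worth noting (my sketch, not from the question): skip install-time placement entirely and resolve the config path at runtime, preferring the per-user copy. The paths follow the question's layout (~/.myapp, /etc/myapp); find_config is a hypothetical helper, with home overridable mainly for testing.

```python
from pathlib import Path

def find_config(name="application.properties", home=None):
    """Return the first existing config file, preferring the per-user copy.

    Hypothetical helper: by default the per-user candidate is
    ~/.myapp/<name> and the global one is /etc/myapp/<name>,
    matching the layout described in the question.
    """
    home = Path(home) if home is not None else Path.home()
    candidates = [home / ".myapp" / name, Path("/etc/myapp") / name]
    for candidate in candidates:
        if candidate.is_file():
            return candidate
    return None
```

With this approach the package ships the default properties file as ordinary package data and copies or reads it on first use, so the same sdist works for both global and --user installs.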

Related

Installing shared library with python package not separately

I have successfully built a Python package that uses CMake combined with pybind11 to create a shared object (.so - assuming only Linux usage at the moment) file. The implementation works, but I am unable to remove this shared object file using pip uninstall package.
My setup command in setup.py file looks like this taken from the pybind/cmake_example repository:
setup(
    name='package',
    version='0.0.1',
    author='-',
    author_email='-',
    description='A test project using pybind11 and CMake',
    long_description='',
    ext_modules=[CMakeExtension('packagebindings')],
    cmdclass=dict(build_ext=CMakeBuild),
    zip_safe=False,
    packages=setuptools.find_packages()
)
My CMakeLists.txt file has an install instruction that looks like this:
install(TARGETS packagebindings COMPONENT python LIBRARY DESTINATION ${Python_SITELIB})
To summarise, here are the files that are created when running pip install .:
path/to/site-packages/package/* - removed by pip uninstall package
path/to/site-packages/package-0.0.1.dist-info/* - removed by pip uninstall package
path/to/site-packages/packagebindings.cpython-37m-x86_64-linux-gnu.so - still present after pip uninstall package
I would like to know how to make it so that running pip uninstall package also removes the .so file.
If a further MRE is required, I can link to a repository.
Your CMake install target seems to place the .so directly into the Python installation directory (DESTINATION ${Python_SITELIB}). I'm guessing this stops the .so from being registered by Python proper, so it is not removed when uninstalling. I would suggest making CMake place the .so in a distribution directory, and then adding the following option to setup():
data_files = [("installation_bin", ["distribution_bin/library.so"])]
This will let the .so be tracked by the Python package manager. The first string is a directory relative to the installation prefix. The second string is the .so file in your distribution, relative to the setup.py script.

Module gets imported under a syntactically false name space?

I am following along with the O'Reilly Head First Python (2nd Edition) course.
At one point you create a webapp and deploy it to PythonAnywhere (chapter 5).
The webapp uses two functions, imported from a module created earlier.
The module is called vsearch.py. I also created a readme.txt and a setup.py and used setuptools to create a source distribution file using:
python3 setup.py sdist
The code of the setup.py reads as follows:
from setuptools import setup

setup(
    name = "vsearch",
    version = "1.0",
    description = "The Head First Python Search Tools",
    author = "HF Python 2e",
    author_email = "hfpy2e@gmail.com",
    url = "headfirstlabs.com",
    py_modules = ["vsearch"],
)
The source distribution file gets created without errors and creates a file called vsearch-1.0.tar.gz
The file then gets uploaded to pythonanywhere and installed via console using:
python3 -m pip install vsearch-1.0.tar.gz --user
Console outputs:
15:36 ~/mysite $ python3 -m pip install vsearch-1.0.tar.gz --user
Looking in links: /usr/share/pip-wheels
Processing ./vsearch-1.0.tar.gz
Building wheels for collected packages: vsearch
Running setup.py bdist_wheel for vsearch ... done
Stored in directory: /home/Mohr/.cache/pip/wheels/85/fd/4e/5302d6f3b92e4057d341443ed5ef0402eb04994663282c12f7
Successfully built vsearch
Installing collected packages: vsearch
Found existing installation: vsearch 1.0
Uninstalling vsearch-1.0:
Successfully uninstalled vsearch-1.0
Successfully installed vsearch-1.0
Now when I try to run my webapp I get the following error:
2020-03-24 16:18:14,592: Error running WSGI application
2020-03-24 16:18:14,592: ModuleNotFoundError: No module named 'vsearch'
2020-03-24 16:18:14,593: File "/var/www/mohr_eu_pythonanywhere_com_wsgi.py", line 16, in <module>
2020-03-24 16:18:14,593: from vsearch4web import app as application # noqa
2020-03-24 16:18:14,593:
2020-03-24 16:18:14,593: File "/home/Mohr/mysite/vsearch4web.py", line 3, in <module>
2020-03-24 16:18:14,593: from vsearch import search4letters
Judging from this error I assume that "vsearch" cannot be found because it was installed as "vsearch-1.0". However, when I try to change this line to:
from vsearch-1.0 import search4letters
I rightfully get a syntax error, since I cannot address modules this way. So what can I do about this? When creating the module in the beginning I added a version number to the setup.py file because, according to the lecture, it is good practice. Setuptools then automatically creates the source distribution file with the "-1.0" at the end. Also, when installing it using the command shown above, it automatically gets installed as "vsearch-1.0", which in turn I am unable to reference in my Python code because of bad syntax.
Am I doing something wrong? Is there a way to import this under another namespace? Is there a way to reference "vsearch-1.0" in my Python code without getting a syntax error?
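One point worth separating out (my sketch, not from the answers below): the "-1.0" belongs to the distribution metadata and filename, not to the import name. A module listed in py_modules is always imported under its plain name; the version is only visible through the package metadata. Illustrated here with pip, since it is present in most environments:

```python
from importlib import metadata  # Python 3.8+

# The sdist filename (e.g. vsearch-1.0.tar.gz) encodes the version, but the
# importable name stays the plain module name from py_modules. For the
# question's package, metadata.version("vsearch") would return "1.0", while
# the import remains: from vsearch import search4letters
print(metadata.version("pip"))  # version string of an installed distribution
```

So "installed as vsearch-1.0" in pip's output never changes what you write after `import`; the ModuleNotFoundError has a different cause.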
There are different Python 3 versions installed on PythonAnywhere. When you install something using python3 -m pip or pip3, you use the default python3, which probably does not match the Python version configured for your web app. Use python3.7 and pip3.7 (or python3.6 and pip3.6, etc.) for --user installations to be sure.
pip install --user (with emphasized --user) installed the package into your user directory: /home/Mohr/.local/lib/pythonX.Y/site-packages/.
To run your WSGI application you probably use a virtual environment in which the user-installed modules are not available. To use modules in the venv you have to install everything in the venv. So activate the venv in a terminal and install the module with the venv's pip:
pip install vsearch-1.0.tar.gz

How to use local library as requirement for pip

I have the following directory structure:
/pythonlibraries
    /libraryA
        setup.py
        libraryA/
            __init__.py
            alib.py
    /libraryB
        setup.py
        libraryB/
            __init__.py
            blib.py
blib.py:
import libraryA
setup.py for libraryB:
from setuptools import setup
setup(name='libraryB',
      version='0.0',
      description='',
      packages=['libraryB'],
      install_requires=["ujson", "/pythonlibraries/libraryA"])
This doesn't work :/
How can I install local dependencies with pip?
Ideally I'd like to do pip install -e /pythonlibraries/libraryB and have it automatically install libraryA from my local disk.
Right now I have to install each local library individually manually...
Did you try writing the full path, like this?
install_requires=["ujson", "/home/user/pythonlibraries/libraryA"])
A path starting with "/" is absolute, so the full path from the filesystem root is needed.
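Another option (a sketch of mine, not part of the answer above): pip 18.1+ accepts PEP 508 direct references in install_requires, so the dependency can be named together with a file:// URL instead of a bare path. The path below is the one from the question; note that PyPI rejects uploads of packages whose requirements contain direct references, so this suits internal use.

```python
# Sketch for libraryB's setup.py: a PEP 508 direct reference of the form
# "<name> @ <url>" names the dependency and points at its local location.
install_requires = [
    "ujson",
    "libraryA @ file:///pythonlibraries/libraryA",
]
```

With that in place, pip install -e /pythonlibraries/libraryB resolves libraryA from the local disk in the same run.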

How to install python package from git repo that has git-lfs content with pip?

I have migrated large files in my git repository to git-lfs. The repository contains the source code of a custom python library. I was able to install it with pip:
pip install git+https://gitserver.com/myrepo.git#branch
Currently (after migration), large files that are stored at lfs, obviously, are out of the installation (there are only links). I have installed git-lfs package from PyPI in the environment but it does not help.
Is there any way to tell pip to fetch git-lfs files while cloning the repo?
I came across the same issue: I have several internal Python packages containing LFS files, each with its own requirements.txt and setup.py, where the former is consumed by pip (which doesn't handle git LFS well) and the latter is based on setuptools...
My requirements.txt looks like:
pip_package1==0.1
pip_package2==0.2
pip_package3==0.3
git+ssh://git@{url}:{port}/{repo}.git#{branch}
In setup.py, one solution is to catch any git+ssh requirement, keep it out of the setup() function, and force its installation through pip instead:
def install_requires():
    reqdir = os.path.dirname(os.path.realpath(__file__))
    with open(os.path.join(reqdir, 'requirements.txt'), encoding='utf-8') as f:
        all_packages = f.readlines()
    packages = [
        package
        for package in all_packages
        if 'git+ssh' not in package
    ]
    manual_pip_packages = [
        package
        for package in all_packages
        if 'git+ssh' in package
    ]
    for package in manual_pip_packages:
        subprocess.call([sys.executable, '-m', 'pip', 'install', package])
    return packages
Then comes the pip / git-lfs compatibility. What I understood is that a pip install on a git repo simply does a git clone and then runs python setup.py [something], so in each setup.py of a package containing LFS files I added a git lfs pull, assuming that setup.py lives inside a git repository.
def pull_first():
    """This script is in a git directory that can be pulled."""
    cwd = os.getcwd()
    gitdir = os.path.dirname(os.path.realpath(__file__))
    os.chdir(gitdir)
    g = git.cmd.Git(gitdir)
    try:
        g.execute(['git', 'lfs', 'pull'])
    except git.exc.GitCommandError:
        raise RuntimeError("Make sure git-lfs is installed!")
    os.chdir(cwd)
That way I have a setup.py:
from setuptools import setup, find_packages
import os
import subprocess
import sys

try:
    import git
except ModuleNotFoundError:
    subprocess.call([sys.executable, '-m', 'pip', 'install', 'gitpython'])
    import git

def install_requires():
    ...

def pull_first():
    ...

pull_first()

setup(name=...,
      version=...,
      description=...,
      long_description=...,
      url=...,
      author=...,
      license=...,
      packages=find_packages(),
      python_requires=">=3.0",
      install_requires=install_requires())
If you have your per-user or system configuration settings properly set, then Git will automatically invoke Git LFS when cloning a repository that uses Git LFS.
The easiest way to do this is to run git lfs install --skip-repo, which will modify your .gitconfig to contain the proper entries. You can verify that your configuration is correct by running git lfs env and making sure that the last three git config options printed are non-empty.
Once that's set up, any time you clone a new repository using Git LFS, the LFS files will be automatically fetched and filled in. If you have an existing repository, you can use git lfs checkout to check out the files manually.

Installing script: check if the installation directory is in the user's path

I have a setup.py to install some command-line tools that accompany
my package:
from setuptools import setup
setup(
    ...
    scripts=['bin/foo'],
    ...
)
which is able to 'install' my script just fine.
The question is now: suppose Python installs the script to a location that is not yet in the user's PATH, for example via
python -m pip install foo --user
How can I alert the user to add this location? When I recently installed twine this way, I got a notification along the lines of "The path './.local/bin' is not in the user's PATH, consider adding it in the appropriate way."
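For the checking side, a minimal sketch of my own (not from the question): compare the interpreter's scripts directory against PATH, which is essentially the comparison pip performs before printing its warning. sysconfig.get_path("scripts") gives the install location for console scripts; the helper takes the PATH string as an argument so the comparison can be tested in isolation.

```python
import os
import sysconfig

def scripts_dir_on_path(scripts_dir, path_env):
    """Return True if scripts_dir appears as an entry in path_env."""
    entries = path_env.split(os.pathsep)
    return scripts_dir in entries

# Typical use after install: print a hint like pip/twine do.
scripts = sysconfig.get_path("scripts")
if not scripts_dir_on_path(scripts, os.environ.get("PATH", "")):
    print(f"NOTE: {scripts!r} is not on your PATH; consider adding it.")
```

Note this only covers printing the hint from your own post-install code; pip itself emits its warning independently of anything in setup.py.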
