I have a setup.py script for my package which I install using
python ./setup.py install
What seems to happen is that every time I increase the version, the old version is not removed from /usr/local/lib/python2.7/dist-packages, so I end up with multiple versions installed side by side.
Is there a way to set this up in a way that when a person updates, the old version gets removed?
There is a similar (but not identical) question on SO that asks how to uninstall a package from setup.py, but I'm not really looking for a separate uninstall option. I am looking for a clean 'update' process that removes old versions before installing new ones.
The other option would be to remove the version number from the installed package name entirely, in which case I suppose it would simply be overwritten, but I haven't been successful in doing that: if I omit the version, the package gets created with version "0.0", which looks weird.
My setup script:
import io
import os
import sys

from setuptools import setup

# Package meta-data.
NAME = 'my_package'
DESCRIPTION = 'My description'
URL = 'https://github.com/myurl'
EMAIL = 'myemail@gmail.com'
AUTHOR = 'Me'
VERSION = '3.1.12'

setup(name=NAME,
      version=VERSION,
      py_modules=['dir.mod1',
                  'dir.mod2',
                  ]
      )
If you want previous versions of your package to be removed when you upgrade, you can use pip instead of invoking setup.py directly. Let's assume your setup.py is in the directory my_package; then, from its parent directory, you can use:
pip install ./my_package --upgrade
(The ./ path prefix makes pip install from the local directory instead of looking the name up on PyPI.)
I've read a discussion where a suggestion was to use the requirements.txt inside the setup.py file, so that the correct dependencies are installed on multiple deployments without having to maintain both a requirements.txt and the list in setup.py.
However, when I'm trying to do an installation via pip install -e ., I get an error:
Obtaining file:///Users/myuser/Documents/myproject
Processing /home/ktietz/src/ci/alabaster_1611921544520/work
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory:
'/System/Volumes/Data/home/ktietz/src/ci/alabaster_1611921544520/work'
It looks like pip is trying to find packages that are available on PyPI (alabaster) on my local machine. Why? What am I missing here? Why isn't pip looking for the required packages on the PyPI server?
I have done it before the other way around, maintaining the setup file and not the requirements file. For the requirements file, just save it as:
*
and for setup, do
from setuptools import find_packages, setup

try:
    from Module.version import __version__
except ModuleNotFoundError:
    exec(open("Module/version.py").read())

setup(
    name="Package Name",
    version=__version__,
    packages=find_packages(),
    package_data={p: ["*"] for p in find_packages()},
    url="",
    license="",
    install_requires=[
        "numpy",
        "pandas",
    ],
    python_requires=">=3.8.0",
    author="First.Last",
    author_email="author@company.com",
    description="Description",
)
For reference, my version.py script looks like:
__build_number__ = "_LOCAL_"
__version__ = f"1.0.{__build_number__}"
Jenkins then replaces the build number with a tag during CI builds.
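As a minimal sketch of what the except branch above does: exec simply runs the source of version.py, which drops __version__ into the surrounding namespace (here an explicit dict is used instead of the module globals):

```python
# Sketch of the fallback branch: run version.py's source directly when the
# package itself cannot be imported yet (e.g. during a fresh build).
version_source = '__build_number__ = "_LOCAL_"\n__version__ = f"1.0.{__build_number__}"\n'

namespace = {}
exec(version_source, namespace)
print(namespace["__version__"])  # 1.0._LOCAL_
```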
This question consists of two separate questions, since the rather philosophical choice of how to arrange setup requirements is actually unrelated to the installation error that you are experiencing.
First about the error: It looks like the project you are trying to install depends on another library (alabaster) of which you apparently also did an editable install using pip3 install -e . that points to this directory:
/home/ktietz/src/ci/alabaster_1611921544520/work
What the error tells you is that the directory where the editable install is supposed to be located does not exist anymore. Only your own project should be installed in editable mode; the dependencies should be installed into a regular site-packages directory, i.e. without the -e option.
To clean up, I would suggest that you do the following:
# clean up references to the broken editable install
pip3 uninstall alabaster
# now do a proper non-editable install
pip3 install alabaster
Concerning the question how to arrange setup requirements, you should primarily use the install_requires and extras_require options of setuptools:
# either in setup.py
setuptools.setup(
    install_requires=[
        'dep1>=1.2',
        'dep2>=2.4.1',
    ]
)

# or in setup.cfg
[options]
install_requires =
    dep1>=1.2
    dep2>=2.4.1

[options.extras_require]
extra_deps_a =
    dep3
    dep4>=4.2.3
extra_deps_b =
    dep5>=5.2.1
Optional requirements can be organised in groups. To include such an extra group with the install, you can run pip3 install '.[extra_deps_name]' (the quotes keep the shell from interpreting the square brackets).
If you wish to define specific dependency environments with exact versions (e.g. for Continuous Integration), you may use requirements.txt files in addition, but the general dependency and version constraint definitions should be done in setup.cfg or setup.py.
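For illustration, such a CI-only requirements.txt would simply pin exact versions (the version numbers here are made up):

```
# requirements.txt used only by CI; the real constraints live in setup.cfg
dep1==1.2.8
dep2==2.4.1
dep5==5.2.1
```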
When you tell pip to install multiple packages simultaneously, it looks up all child dependencies and installs the most recent version that is allowed by all parent package restrictions.
For example, if packageA requires child>=0.9 and packageB requires child<=1.1, then pip will install child==1.1.
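That behaviour can be sketched with a toy resolver. This is not pip's real algorithm (pip also handles pre-releases, epochs, backtracking, etc.), and the package names and versions are made up:

```python
def version_key(v):
    # Compare dotted versions numerically rather than as strings
    return tuple(int(part) for part in v.split("."))

available = ["0.8", "0.9", "1.0", "1.1", "1.2"]

# packageA requires child>=0.9, packageB requires child<=1.1
compatible = [v for v in available
              if version_key("0.9") <= version_key(v) <= version_key("1.1")]

# pip picks the newest version allowed by all constraints
chosen = max(compatible, key=version_key)
print(chosen)  # 1.1
```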
I'm trying to write a script to scan a requirements.txt and pin all unlisted child package versions.
One way to do this would be to simply install everything from a requirements.txt file via pip install -r requirements.txt, then parse the output of pip freeze, and strip out all the packages from my requirements.txt file. Everything left should be child packages and the highest version number that pip calculated.
However, this requires creating a Python virtual environment and installing all the packages, which can take a bit of time. Is there any option in pip to do the version calculation without actually installing the packages? If not, is there a pip-like tool that provides this?
You can read the package requirements for PyPI-hosted packages from PyPI's JSON API:
python3 terminal/script:
import csv

import requests

your_package = 'packageA'
pypi_url = 'https://pypi.python.org/pypi/' + your_package + '/json'

data = requests.get(pypi_url).json()
reqs = data['info']['requires_dist'] or []  # requires_dist is None when no deps are declared
print(reqs)

print('Writing requirements for ' + your_package + ' to requirements.csv')
with open('requirements.csv', 'w', newline='') as f:
    w = csv.writer(f, delimiter=',')
    w.writerows([x.split(',') for x in reqs])
If I understand your question correctly, you're trying to install a particular version of a package. One way is to pin the version in your requirements.txt file; you can also pass it straight to pip install using the package==version syntax, for example:
pip install WhatsApp==3.5
I am enrolled in a machine learning competition and, for some reason, the submission is not a CSV file, but rather the code in Python.
In order to make it run, they asked the participants to create another file called install.py to automatically install all the packages used.
I need to install multiple packages (keras, numpy, etc.).
For each package, I have to use the command os.system. I have no idea what it does, and this is the only information that I have.
Yes, this type of question has been asked before, but not with several packages and this specific os.system line.
I don't know if this might work for your specific issues. Give it a go.
import os

packages = ["keras", "sklearn"]  # etc.

for package in packages:
    os.system("pip install " + package)  # installs that particular package
The way I recommend doing this is to import pip as a module, as follows: (untested)
import pip

def install(package):
    if hasattr(pip, 'main'):
        pip.main(['install', package])
    else:
        pip._internal.main(['install', package])

packages = []  # Add your packages as strings

for package in packages:
    install(package)
I used this question for most of the code.
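Note that pip.main is not a stable, documented API (which is why the pip._internal fallback is needed at all). A commonly suggested alternative, shown here as a sketch rather than the answer's original approach, is to shell out to pip through the current interpreter:

```python
import subprocess
import sys

def pip_install_cmd(package):
    # Build the command explicitly so it uses the pip that belongs to
    # the interpreter running this script.
    return [sys.executable, "-m", "pip", "install", package]

def install(package):
    # Raises CalledProcessError if pip exits with a non-zero status
    subprocess.check_call(pip_install_cmd(package))

print(pip_install_cmd("keras"))
```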
You could create a requirements.txt file with all of your package requirements.
import os
os.system("pip install -r requirements.txt")
I've created a python package that is posted to pypi.org. The package consists of a single .py file, which is the same name as the package.
After installing the package via pip (pip install package_name) in a conda or standard python environment I must use the following statement to import a function from this module:
from package_name.package_name import function_x
How can I reorganise my package or adjust my installation command so that I can use the import statement
from package_name import function_x
which I have successfully used when I install via python setup.py install.
My setup.py is below
setup(
    name="package_name",
    version="...",
    packages=find_packages(exclude=['examples', 'docs', 'build', 'dist']),
)
Change your setup arguments from using packages to using py_modules, e.g.
setup(
    name="package_name",
    version="..",
    py_modules=['package_name'],
)
This is documented here https://docs.python.org/2/distutils/introduction.html#a-simple-example
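If you would rather keep a package (for example to ship more modules later), another option is to re-export the function from the package's __init__.py. The sketch below simulates that layout in a temporary directory; the file contents and function_x are placeholders:

```python
import os
import sys
import tempfile

# Simulated layout:
#   package_name/
#       __init__.py        -> re-exports function_x
#       package_name.py    -> defines function_x
root = tempfile.mkdtemp()
pkg = os.path.join(root, "package_name")
os.mkdir(pkg)

with open(os.path.join(pkg, "package_name.py"), "w") as f:
    f.write("def function_x():\n    return 'hello'\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .package_name import function_x\n")

sys.path.insert(0, root)
from package_name import function_x  # the import the question asks for

print(function_x())  # hello
```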
I'd like pip to install a dependency that I have on GitHub when the user issues the command to install the original software, also from source on GitHub. Neither of these packages are on PyPi (and never will be).
The user issues the command:
pip install -e git+https://github.com/Lewisham/cvsanaly@develop#egg=cvsanaly
This repo has a requirements.txt file, with another dependency on GitHub:
-e git+https://github.com/Lewisham/repositoryhandler#egg=repositoryhandler
What I'd like is a single command that a user can issue to install the original package, have pip find the requirements file, then install the dependency too.
This answer helped me solve the same problem you're talking about.
There doesn't seem to be an easy way for setup.py to use the requirements file directly to define its dependencies, but the same information can be put into the setup.py itself.
I have this requirements.txt:
PIL
-e git://github.com/gabrielgrant/django-ckeditor.git#egg=django-ckeditor
But when the package containing that requirements.txt is installed, pip ignores the requirements.
This setup.py seems to coerce pip into installing the dependencies (including my github version of django-ckeditor):
from setuptools import setup
setup(
name='django-articles',
...,
install_requires=[
'PIL',
'django-ckeditor>=0.9.3',
],
dependency_links = [
'http://github.com/gabrielgrant/django-ckeditor/tarball/master#egg=django-ckeditor-0.9.3',
]
)
Edit:
This answer also contains some useful information.
Specifying the version as part of the "#egg=..." fragment is required to identify which version of the package is available at the link. Note, however, that if you always want to depend on your latest version, you can set the version to dev in install_requires, dependency_links and the other package's setup.py.
Edit: using dev as the version isn't a good idea, as per comments below.
Here's a small script I used to generate install_requires and dependency_links from a requirements file.
import os
import re


def which(program):
    """
    Detect whether or not a program is installed.
    Thanks to http://stackoverflow.com/a/377028/70191
    """
    def is_exe(fpath):
        return os.path.exists(fpath) and os.access(fpath, os.X_OK)

    fpath, _ = os.path.split(program)
    if fpath:
        if is_exe(program):
            return program
    else:
        for path in os.environ['PATH'].split(os.pathsep):
            exe_file = os.path.join(path, program)
            if is_exe(exe_file):
                return exe_file
    return None


EDITABLE_REQUIREMENT = re.compile(
    r'^-e (?P<link>(?P<vcs>git|svn|hg|bzr).+#egg=(?P<package>.+)-(?P<version>\d(?:\.\d)*))$'
)

install_requires = []
dependency_links = []

for requirement in (l.strip() for l in open('requirements')):
    match = EDITABLE_REQUIREMENT.match(requirement)
    if match:
        assert which(match.group('vcs')) is not None, \
            "VCS '%(vcs)s' must be installed in order to install %(link)s" % match.groupdict()
        install_requires.append("%(package)s==%(version)s" % match.groupdict())
        dependency_links.append(match.group('link'))
    else:
        install_requires.append(requirement)
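For example, applied to the editable requirement line from the earlier answer, the regex splits it into link, package and version parts, from which the pinned requirement string is built:

```python
import re

EDITABLE_REQUIREMENT = re.compile(
    r'^-e (?P<link>(?P<vcs>git|svn|hg|bzr).+#egg=(?P<package>.+)-(?P<version>\d(?:\.\d)*))$'
)

line = "-e git://github.com/gabrielgrant/django-ckeditor.git#egg=django-ckeditor-0.9.3"
match = EDITABLE_REQUIREMENT.match(line)

print(match.group("package"))  # django-ckeditor
print(match.group("version"))  # 0.9.3
print("%(package)s==%(version)s" % match.groupdict())  # django-ckeditor==0.9.3
```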
Does this answer your question?
setup(name='application-xpto',
      version='1.0',
      author='me,me,me',
      author_email='xpto@mail.com',
      packages=find_packages(),
      include_package_data=True,
      description='web app',
      install_requires=open('app/requirements.txt').readlines(),
      )