How can I move my Python2.6 site-packages into Python2.7?

I just ran an update on ArchLinux which gave me Python3 and Python2.7.
Before this update, I was using Python2.6. The modules I have installed reside in /usr/lib/python2.6/site-packages. I now want to use Python2.7 and remove Python2.6.
How can I move my Python2.6 modules into Python2.7?
Is it as simple as doing mv /usr/lib/python2.6/site-packages/* /usr/lib/python2.7/site-packages?

Your question is really, "How can I get the packages I have in Python 2.6 into my [new] Python 2.7 configuration? Would copying the files work?"
I would recommend installing the packages into 2.7 the same way you installed them for 2.6. I would not recommend copying the files.
Reasonable ways to install the packages are:
easy_install
Get easy_install like this: wget http://python-distribute.org/distribute_setup.py && sudo python ./distribute_setup.py
pip install
Get pip like this: sudo easy_install pip
your distribution's package manager (pacman on Arch, apt-get on Debian-based systems)
wget and untar
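For example, a minimal reinstall sequence under the new interpreter might look like this (a sketch; the final package name is a placeholder, and the versioned easy_install-2.7 script name assumes distribute/setuptools created one for Python 2.7):
sudo python2.7 distribute_setup.py
sudo easy_install-2.7 pip
sudo easy_install-2.7 somepackage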

Not a complete answer: it is not as simple as a mv. The files are byte-compiled into .pyc files which are specific to Python versions, so at the very least you'd have to regenerate the .pyc files. (Removing them should be sufficient, too.) Regenerating can be done using compileall.py.
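For example (adjust the site-packages path to your system):
python2.7 -m compileall /usr/lib/python2.7/site-packages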
Most distributions offer a saner way to upgrade Python modules than manual fiddling like this, so maybe someone else can give the Arch-specific part of the answer?

The clean way would be re-installing. However, for many, if not most, pure-Python packages the mv approach would work.

You might want to 'easy_install yolk', which can be invoked as 'yolk -l' to give you an easy-to-read list of all the installed packages.

Try something like this:
#!/usr/bin/env python
import os
import os.path
import subprocess
import sys
import tempfile

def distributions(path):
    # Extrapolate from paths which distributions presently exist.
    (parent, child) = os.path.split(path)
    while child != '' and not child.startswith('python'):
        (parent, child) = os.path.split(parent)
    if len(child) > 0:
        rel = os.path.relpath(path, os.path.join(parent, child))
        ret = []
        for distro in os.listdir(parent):
            if distro.startswith('python'):
                dir = os.path.join(os.path.join(parent, distro), rel)
                if os.path.isdir(dir):
                    ret.append((distro, dir))
        ret.sort()
        return ret
    return []

def packages(dir):
    return [pkg.split('-')[0] for pkg in os.listdir(dir)]

def migrate(old, new):
    print 'moving all packages found in ' + old[0] + ' (' + old[1] + ') to ' + new[0] + ' (' + new[1] + ')'
    # Locate easy_install by capturing the output of `which`
    f = tempfile.TemporaryFile(mode='w+')
    subprocess.call(['which', 'easy_install'], stdout=f, stderr=subprocess.PIPE)
    f.seek(0)
    easy_install = f.readline().strip()
    f.close()
    if os.path.isfile(easy_install):
        pkgs = packages(old[1])
        success = []
        failure = []
        for pkg in pkgs:
            # Invoke easy_install on the package, under the new interpreter
            print 'installing "' + pkg + '" for ' + new[0]
            result = subprocess.call([new[0], easy_install, pkg])
            if result != 0:
                failure.append(pkg)
                print 'failed'
            else:
                success.append(pkg)
                print 'success'
        print str(len(success)) + ' of ' + str(len(pkgs)) + ' succeeded'
    else:
        print 'Unable to locate easy_install script'

if __name__ == '__main__':
    package_path = sys.path[-1]
    distros = distributions(package_path)
    migrate(distros[0], distros[1])

Related

Generating list of modules from all .py files and installing without specifying module version

I have hundreds of Python files and I need to install all the modules they use. I thought I'd create a file with all the module names that are imported in the files:
import os

def remove_duplicates():
    lines = open('all_modules.txt', 'r').readlines()
    lines_set = set(lines)
    out = open('all_modules.txt', 'w')
    for line in lines_set:
        out.write(line)

def main():
    fileDir = r"C:\Users\Computador\Desktop\tests vscode"
    fileExt = r".py"
    py_file_list = [_ for _ in os.listdir(fileDir) if _.endswith(fileExt)]
    with open('all_modules.txt', 'w+', newline='', encoding='UTF-8') as f:
        for py_file in py_file_list:
            # Join with fileDir so this works regardless of the current directory
            with open(os.path.join(fileDir, py_file)) as stacktest:
                stacktest_list = [line.strip() for line in stacktest.readlines()]
                for line in stacktest_list:
                    if 'import ' in line and "'" not in line:
                        after_import = line.split('import ', 1)[1]
                        before_space = after_import.split(' ')[0]
                        before_comma = before_space.split(',')[0]
                        f.write(before_comma + '\n')
    remove_duplicates()

if __name__ == '__main__':
    main()
Example result in all_modules.txt:
pandas
time
dataframe_image
webbrowser
num2words
glob
Pool
schedule
move
csv
os
feedparser
But to install with pip install using a requirements.txt, it is necessary to have the version of each module I want to install; in this case the computer is completely clean and I want the latest version of all modules.
Is there a way to install these modules at once without having to write them one by one manually?
But to install with pip install using requirements.txt it is necessary to have the version of the module I want to install
That's not true. You can take the file you have generated and run pip install -r all_modules.txt
Pip will go through each name in the file and attempt to install the latest version available (which might not be the same version you used when writing the code originally; things might not work, but it will at least install the package).
The other problems you will likely encounter are:
the names in your file are extracted from import statements, but the import name of a package does not necessarily match the name you need to pip install from PyPI
you will find some imported names that do not need to be pip installed at all, such as modules from the Python standard library, or modules from your own project; these will give an error when you try to pip install them, or worse, you may install an unrelated package that happens to have the same name on PyPI (see the sketch below for filtering out standard-library names)
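A minimal way to screen standard-library names out of the generated file before installing, assuming Python 3.10+ where sys.stdlib_module_names is available (all_modules.txt is the file generated above):
import sys

with open('all_modules.txt') as f:
    names = {line.strip() for line in f if line.strip()}

# Keep only names that are not part of the standard library.
# PyPI names may still differ from import names (e.g. the 'Pool'
# entry above is really multiprocessing.Pool, not a package).
third_party = sorted(names - set(sys.stdlib_module_names))
print('\n'.join(third_party))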

Install all the site packages of Django rest framework project at once [duplicate]

What is the most efficient way to list all dependencies required to deploy a working project elsewhere (on a different OS, say)?
Python 2.7, Windows dev environment, not using a virtualenv per project, but a global dev environment, installing libraries as needed, happily hopping from one project to the next.
I've kept track of most (not sure all) libraries I had to install for a given project. I have not kept track of any sub-dependencies that came auto-installed with them. Doing pip freeze lists both, plus all the other libraries that were ever installed.
Is there a way to list what you need to install, no more, no less, to deploy the project?
EDIT In view of the answers below, some clarification. My project consists of a bunch of modules (that I wrote), each with a bunch of imports. Should I just copy-paste all the imports from all modules into a single file, sort it while eliminating duplicates, and throw out everything from the standard library (and how do I know what is in it)? Or is there a better way? That's the question.
pipreqs solves the problem. It generates a project-level requirements.txt file.
Install pipreqs: pip install pipreqs
Generate the project-level requirements.txt file: pipreqs /path/to/your/project/
The requirements file will be saved in /path/to/your/project/requirements.txt
If you want to read more about the advantages of pipreqs over pip freeze, see the pipreqs documentation
Scan your import statements. Chances are you only import things you explicitly wanted to import, and not the dependencies.
Make a list like the one pip freeze does, then create and activate a virtualenv.
Do pip install -r your_list, and try to run your code in that virtualenv. Heed any ImportError exceptions, match them to packages, and add to your list. Repeat until your code runs without problems.
Now you have a list to feed to pip install on your deployment site.
This is extremely manual, but requires no external tools, and forces you to make sure that your code runs. (Running your test suite as a check is great but not sufficient.)
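A sketch of that loop (the environment name and entry-point filename here are placeholders):
virtualenv check_env
. check_env/bin/activate
pip install -r your_list
python your_entry_point.py  # on ImportError: map the module to its package, add it to your_list, repeat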
On your terminal type:
pip install pipdeptree
cd <your project root>
pipdeptree
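pipdeptree prints installed packages as a tree, with sub-dependencies indented under the package that pulled them in, so the top-level entries are usually the ones you installed deliberately. It can also emit a flat requirements-style list (check pipdeptree -h for your version):
pipdeptree --freeze > requirements.txt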
I found the answers here didn't work too well for me, as I only wanted the imports from inside our repository (e.g. import requests I don't need, but from my.module.x import y I do need).
I noticed that PyInstaller had perfectly good functionality for this, though. I did a bit of digging, found their dependency-graph code, and created a function to do what I wanted with a bit of trial and error. I made a gist since I'll likely need it again in the future, but here is the code:
import os
from PyInstaller.depend.analysis import initialize_modgraph

def get_import_dependencies(*scripts):
    """Get a list of all imports required.

    Args: script filenames.
    Returns: list of imports
    """
    script_nodes = []
    scripts = set(map(os.path.abspath, scripts))
    # Process the scripts and build the map of imports
    graph = initialize_modgraph()
    for script in scripts:
        graph.run_script(script)
    for node in graph.nodes():
        if node.filename in scripts:
            script_nodes.append(node)
    # Search the imports to find what is in use
    dependency_nodes = set()
    def search_dependencies(node):
        for reference in graph.getReferences(node):
            if reference not in dependency_nodes:
                dependency_nodes.add(reference)
                search_dependencies(reference)
    for script_node in script_nodes:
        search_dependencies(script_node)
    return sorted(dependency_nodes)

if __name__ == '__main__':
    # Show the PyInstaller imports used in this file
    for node in get_import_dependencies(__file__):
        if node.identifier.split('.')[0] == 'PyInstaller':
            print(node)
All the node types are defined in PyInstaller.lib.modulegraph.modulegraph, such as SourceModule, MissingModule, Package and BuiltinModule. These will come in useful when performing checks.
Each of these has an identifier (path.to.my.module), and depending on the node type, it may have a filename (C:/path/to/my/module/__init__.py), and packagepath (['C:/path/to/my/module']).
I can't really post any extra code as it is quite specific to our setup using pyarmor with PyInstaller, but I can happily say it works flawlessly so far.
You could use the findpydeps module I wrote:
Install it via pip: pip install findpydeps
If you have a main file: findpydeps -l -i path/to/main.py (the -l will follow the imports in the file)
Or, if your code is in a folder: findpydeps -i path/to/folder
Most importantly, the output is pip-friendly:
do findpydeps -i . > requirements.txt (assuming . is your project's directory)
then pip install -r requirements.txt
You can of course search through multiple directories and files for requirements, like: findpydeps -i path/to/file1.py path/to/folder path/to/file2.py, etc.
By default, it will remove the packages that are in the python standard library, as well as local packages. Refer to the -r/--removal-policy argument for more info.
If you don't want imports that are done in if, try/except or with blocks, just add --no-blocks. The same goes for functions with --no-functions.
Anyway, you get the idea: there are a lot of options (most of them are not discussed here). Refer to the findpydeps -h output for more help!
The way to do this is to analyze your imports. To automate that, check out Snakefood. Then you can make a requirements.txt file and get on your way to using virtualenv.
The following will list the dependencies, excluding modules from the standard library:
sfood -fuq package.py | sfood-filter-stdlib | sfood-target-files
Related questions:
Get a list of python packages used by a Django Project
list python package dependencies without loading them?
You can simply use pipreqs, install it using:
pip install pipreqs
Then run pipreqs . in the project directory.
A text file named requirements.txt will be created for you, which looks like this:
numpy==1.21.1
pytest==6.2.4
matplotlib==3.4.2
PySide2==5.15.2
I would just run something like this:
import importlib
import os
import pathlib
import re
import sys
from sty import fg

sys.setrecursionlimit(100000000)
dependenciesPaths = list()
dependenciesNames = list()
paths = sys.path
red = fg(255, 0, 0)
green = fg(0, 200, 0)
end = fg.rs

def main(path):
    try:
        print("Finding imports in '" + path + "':")
        file = open(path)
        contents = file.read()
        wordArray = re.split(" |\n", contents)
        currentList = list()
        nextPaths = list()
        skipWord = -1
        # Collect the module names mentioned in import/from statements
        for wordNumb in range(len(wordArray)):
            word = wordArray[wordNumb]
            if wordNumb == skipWord:
                continue
            elif word == "from":
                currentList.append(wordArray[wordNumb + 1])
                skipWord = wordNumb + 2
            elif word == "import":
                currentList.append(wordArray[wordNumb + 1])
        currentList = set(currentList)
        for i in currentList:
            print(i)
        print("Found imports in '" + path + "'")
        print("Finding paths for imports in '" + path + "':")
        currentList2 = currentList.copy()
        currentList = list()
        for i in currentList2:
            if i in dependenciesNames:
                print(i, "already found")
            else:
                dependenciesNames.append(i)
                try:
                    fileInfo = importlib.machinery.PathFinder().find_spec(i)
                    print(fileInfo.origin)
                    dependenciesPaths.append(fileInfo.origin)
                    currentList.append(fileInfo.origin)
                except AttributeError as e:
                    print(e)
                    print(i)
                    print(importlib.machinery.PathFinder().find_spec(i))
                    # print(red, "Odd noneType import called ", i, " in path ", path, end, sep='')
        print("Found paths for imports in '" + path + "'")
        # Recurse into each resolved dependency file
        for fileInfo in currentList:
            main(fileInfo)
    except Exception as e:
        print(e)

if __name__ == "__main__":
    # args
    args = sys.argv
    print(args)
    if len(args) == 2:
        p = args[1]
    elif len(args) == 3:
        p = args[1]
        open(args[2], "a").close()
        sys.stdout = open(args[2], "w")
    else:
        print('Usage')
        print('PyDependencies <InputFile>')
        print('PyDependencies <InputFile> <OutputFile>')
        sys.exit(2)
    if not os.path.exists(p):
        print(red, "Path '" + p + "' is not a real path", end, sep='')
    elif os.path.isdir(p):
        print(red, "Path '" + p + "' is a directory, not a file", end, sep='')
    elif "".join(pathlib.Path(p).suffixes) != ".py":
        print(red, "Path '" + p + "' is not a python file", end, sep='')
    else:
        print(green, "Path '" + p + "' is a valid python file", end, sep='')
        main(p)
        deps = set(dependenciesNames)
        print(deps)
    sys.exit()
If you're using an Anaconda virtual environment, you can run the below command inside the environment to create a txt file of all the dependencies used in the project.
conda list -e > requirements.txt
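The counterpart command to recreate an environment from that file is (standard conda usage; <env_name> is a placeholder):
conda create --name <env_name> --file requirements.txt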
This answer is to help someone list all dependencies, with versions, from within a Python script itself. It will list all dependencies in the active (e.g. user virtual) environment.
from pip._internal.operations import freeze

x = freeze.freeze()
for dependency in x:
    print(dependency)
For this, pip itself must be available in the environment (it almost always is; if needed, install or upgrade it with python -m pip install --upgrade pip). Note that pip._internal is a private API and may change between pip releases.
The print output would look like the following.
certifi==2020.12.5
chardet==4.0.0
idna==2.10
numpy==1.20.3
oauthlib==3.1.0
pandas==1.2.4
pip==21.1.2
python-dateutil==2.8.1
pytz==2021.1
requests==2.25.1
requests-oauthlib==1.3.0
setuptools==41.2.0
six==1.16.0
urllib3==1.26.4
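A variant that avoids the private pip._internal API, at the cost of spawning a subprocess (a sketch assuming Python 3.7+ for the text=True keyword):
import subprocess
import sys

# Run `pip freeze` under the current interpreter and capture its output
output = subprocess.check_output([sys.executable, '-m', 'pip', 'freeze'], text=True)
for dependency in output.splitlines():
    print(dependency)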

Python/Pip packaging; how to move built files to install directory

I have been working on a Python package which wraps some C++ libraries that need to be built from source. I build these with CMake, and I want the whole thing to be 'pip install'-able in the end. I am almost there; however, I am having problems getting the libraries built by CMake to end up in the final Python installation directory.
I managed to get them into the final wheel, oddly enough, but they aren't in my site-packages directory.
My setup.py file looks like this:
import os
import re
import sys
import sysconfig
import site
import platform
import subprocess
import pathlib
from distutils.version import LooseVersion
from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext as build_ext_orig

class CMakeExtension(Extension):
    def __init__(self, name, sourcedir=''):
        Extension.__init__(self, name, sources=[])
        self.sourcedir = os.path.abspath(sourcedir)

class CMakeBuild(build_ext_orig):
    def run(self):
        try:
            out = subprocess.check_output(['cmake', '--version'])
        except OSError:
            raise RuntimeError("CMake must be installed to build the following extensions: " +
                               ", ".join(e.name for e in self.extensions))
        if platform.system() == "Windows":
            raise RuntimeError("Sorry, pyScannerBit doesn't work on Windows platforms. Please use Linux or OSX.")
        for ext in self.extensions:
            self.build_extension(ext)

    def build_extension(self, ext):
        extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
        cmake_args = ['-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=' + extdir,
                      '-DPYTHON_EXECUTABLE=' + sys.executable,
                      '-DCMAKE_VERBOSE_MAKEFILE:BOOL=OFF',
                      '-Wno-dev',
                      '-DCMAKE_RUNTIME_OUTPUT_DIRECTORY=' + extdir,
                      '-DSCANNERBIT_STANDALONE=True',
                      '-DCMAKE_INSTALL_RPATH=$ORIGIN',
                      '-DCMAKE_BUILD_WITH_INSTALL_RPATH:BOOL=ON',
                      '-DCMAKE_INSTALL_RPATH_USE_LINK_PATH:BOOL=ON',
                      '-DCMAKE_INSTALL_PREFIX:PATH=' + extdir,
                      ]
        cfg = 'Debug' if self.debug else 'Release'
        build_args = ['--config', cfg]
        if platform.system() == "Windows":
            cmake_args += ['-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_{}={}'.format(cfg.upper(), extdir)]
            if sys.maxsize > 2**32:
                cmake_args += ['-A', 'x64']
            build_args += ['--', '/m']
        else:
            cmake_args += ['-DCMAKE_BUILD_TYPE=' + cfg]
            build_args += ['--', '-j2']
        env = os.environ.copy()
        env['CXXFLAGS'] = '{} -DVERSION_INFO=\\"{}\\"'.format(env.get('CXXFLAGS', ''),
                                                              self.distribution.get_version())
        if not os.path.exists(self.build_temp):
            os.makedirs(self.build_temp)
        # untar ScannerBit tarball
        subprocess.check_call(['tar', '-C', 'pyscannerbit/scannerbit/untar/ScannerBit',
                               '-xf', 'pyscannerbit/scannerbit/ScannerBit_stripped.tar',
                               '--strip-components=1'], cwd=ext.sourcedir, env=env)
        # First cmake
        subprocess.check_call(['cmake', ext.sourcedir] + cmake_args, cwd=self.build_temp, env=env)
        # Build all the scanners
        subprocess.check_call(['cmake', '--build', '.', '--target', 'multinest'] + build_args, cwd=self.build_temp)
        # Re-run cmake to detect built scanner plugins
        subprocess.check_call(['cmake', ext.sourcedir], cwd=self.build_temp)
        # Main build
        subprocess.check_call(['cmake', '--build', '.'] + build_args, cwd=self.build_temp)
        # Install
        #subprocess.check_call(['cmake', '--build', '.', '--target', 'install'], cwd=self.build_temp)

setup(
    name='pyscannerbit',
    version='0.0.8',
    author='Ben Farmer',
    # Add yourself if you contribute to this package
    author_email='ben.farmer#gmail.com',
    description='A python interface to the GAMBIT scanning module, ScannerBit',
    long_description='',
    ext_modules=[CMakeExtension('_interface')],
    cmdclass=dict(build_ext=CMakeBuild),
    zip_safe=False,
    packages=['pyscannerbit'],
)
As you can see, I am telling CMake to build the libraries in 'extdir', which turns out to be
/tmp/pip-req-build-d7mfvn1a/build/lib.linux-x86_64-3.6
I had assumed that the files would just be copied from here (or some other temporary directory?) into the final install path in bulk, but perhaps it doesn't work like that (though, as I said earlier, these built files do end up in the generated wheel). Do these built files need to be added to MANIFEST.in, or to some 'package_data' entry, or something like that? Currently they are not listed anywhere like that, since it was my understanding that those were for moving files around pre-build, not post-build. Currently I only use MANIFEST.in to make sure my sdist tarball gets filled correctly.
For completeness, I am building the package with pip as follows:
python setup.py sdist
pip install -v dist/pyscannerbit-0.0.8.tar.gz
This is just so I know that the build from the tarball works, for later use with PyPI.
The source is on github if you want to try it out: https://github.com/bjfar/pyscannerbit
OK, so it seems that I just had the paths a bit wrong. I was previously setting CMAKE_LIBRARY_OUTPUT_DIRECTORY to
extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
However, I needed to point it to
extdir + '/pyscannerbit'
where pyscannerbit is the name of the package. Otherwise the files end up in the parent directory of where the build occurs, not inside the project directory, so they don't subsequently get copied to the install path.
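In terms of the setup.py above, the fix is roughly this (a sketch; only the output-directory arguments change, the rest of build_extension stays as before):
def build_extension(self, ext):
    extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
    # Point CMake's outputs inside the package directory, so the built
    # libraries live in the 'pyscannerbit' package and get installed with it
    package_dir = os.path.join(extdir, 'pyscannerbit')
    cmake_args = ['-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=' + package_dir,
                  '-DCMAKE_RUNTIME_OUTPUT_DIRECTORY=' + package_dir,
                  '-DCMAKE_INSTALL_PREFIX:PATH=' + package_dir,
                  # ... remaining arguments unchanged ...
                  ]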

Check if a Debian package is installed from Python

Is there an elegant and more Python-like way to check if a package is installed on Debian?
In a bash script, I'd do:
dpkg -s packagename | grep Status
Suggestions to do the same in a Python script?
This is a pythonic way:
import apt

cache = apt.Cache()
if cache['package-name'].is_installed:
    print "YES it's installed"
else:
    print "NO it's NOT installed"
A slightly nicer, hopefully idiomatic version of your bash example:
import os, subprocess

devnull = open(os.devnull, "w")
retval = subprocess.call(["dpkg", "-s", "coreutils"], stdout=devnull, stderr=subprocess.STDOUT)
devnull.close()
if retval != 0:
    print "Package coreutils not installed."
If you are checking for the existence of a package that installs a Python module, you can test for this from within a dependent Python script - try to import it and see if you get an exception:
import sys

try:
    import maybe
except ImportError:
    print "Sorry, must install the maybe package to run this program."
    sys.exit(1)
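If you would rather check without actually importing the module, importlib can do the lookup (Python 3; 'maybe' is the example module from above):
import importlib.util
import sys

# find_spec returns None when the module cannot be found
if importlib.util.find_spec("maybe") is None:
    print("Sorry, must install the maybe package to run this program.")
    sys.exit(1)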
This is some code that gives you a neat way to report whether the package is installed or not (without triggering a messy error message on the screen). It works in Python 3 only, though.
import apt

cache = apt.Cache()
cache.open()
try:
    # KeyError is raised if the package is not known to apt at all;
    # is_installed is False if it is known but not installed
    installed = cache['notapkg'].is_installed
except KeyError:
    installed = False
response = "Package Installed." if installed else "Package Not Installed."
print(response)
Have a look at the commands module. It's very useful for running things on the command line and getting the status. (Note that commands was removed in Python 3 in favour of subprocess.)
Otherwise, I'm sure there is some library that will let you interact with apt. python-apt might work, but it's a bit raw. Just capturing the command line seems easier.
I needed a cross-platform compatible solution so I ended up using which.
import subprocess

retval = subprocess.call(["which", "packagename"])
if retval != 0:
    print("Packagename not installed!")
Although it's not as Pythonic as the above answers, it does work on most platforms.
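On Python 3.3+, the same check can be done without spawning a process. Note that, like which, this only tells you whether an executable of that name is on PATH, not whether a Debian package is installed ("packagename" is a placeholder):
import shutil

# shutil.which returns the executable's path, or None if it is not found
if shutil.which("packagename") is None:
    print("Packagename not installed!")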
Inspired by the previous answers, this works nicely for both Python 2 and Python 3 and avoids a try/except for the KeyError:
import apt

package = 'foo'  # insert your package name here
cache = apt.Cache()
package_installed = False
if package in cache:
    package_installed = cache[package].is_installed
I had the same question, searched every corner of the Internet, and couldn't find an answer; but after some experimenting I got it working.
Code:
import os

packagename = "figlet"  # type in your package name
os.system("dpkg -s " + packagename + " | grep Status")
To run any terminal command from Python:
Code:
import os
os.system("YOUR TERMINAL COMMAND HERE")
