I have been working on a Python package which wraps some C++ libraries that need to be built from source. I build these with CMake, and I want the whole thing to be 'pip install'able in the end. I am almost there, however I am having problems getting the libraries built by CMake to end up in the final Python installation directory.
I managed to get them into the final wheel, oddly enough, but they aren't in my site-packages directory.
My setup.py file looks like this:
import os
import re
import sys
import sysconfig
import site
import platform
import subprocess
import pathlib
from distutils.version import LooseVersion
from setuptools import setup, Extension
from setuptools.command.build_ext import build_ext as build_ext_orig
class CMakeExtension(Extension):
def __init__(self, name, sourcedir=''):
Extension.__init__(self, name, sources=[])
self.sourcedir = os.path.abspath(sourcedir)
class CMakeBuild(build_ext_orig):
def run(self):
try:
out = subprocess.check_output(['cmake', '--version'])
except OSError:
raise RuntimeError("CMake must be installed to build the following extensions: " +
", ".join(e.name for e in self.extensions))
if platform.system() == "Windows":
raise RuntimeError("Sorry, pyScannerBit doesn't work on Windows platforms. Please use Linux or OSX.")
for ext in self.extensions:
self.build_extension(ext)
def build_extension(self, ext):
extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
cmake_args = ['-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=' + extdir,
'-DPYTHON_EXECUTABLE=' + sys.executable,
'-DCMAKE_VERBOSE_MAKEFILE:BOOL=OFF',
'-Wno-dev',
'-DCMAKE_RUNTIME_OUTPUT_DIRECTORY=' + extdir,
'-DSCANNERBIT_STANDALONE=True',
'-DCMAKE_INSTALL_RPATH=$ORIGIN',
'-DCMAKE_BUILD_WITH_INSTALL_RPATH:BOOL=ON',
'-DCMAKE_INSTALL_RPATH_USE_LINK_PATH:BOOL=ON',
'-DCMAKE_INSTALL_PREFIX:PATH=' + extdir,
]
cfg = 'Debug' if self.debug else 'Release'
build_args = ['--config', cfg]
if platform.system() == "Windows":
cmake_args += ['-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_{}={}'.format(cfg.upper(), extdir)]
if sys.maxsize > 2**32:
cmake_args += ['-A', 'x64']
build_args += ['--', '/m']
else:
cmake_args += ['-DCMAKE_BUILD_TYPE=' + cfg]
build_args += ['--', '-j2']
env = os.environ.copy()
env['CXXFLAGS'] = '{} -DVERSION_INFO=\\"{}\\"'.format(env.get('CXXFLAGS', ''),
self.distribution.get_version())
if not os.path.exists(self.build_temp):
os.makedirs(self.build_temp)
# untar ScannerBit tarball
subprocess.check_call(['tar','-C','pyscannerbit/scannerbit/untar/ScannerBit','-xf','pyscannerbit/scannerbit/ScannerBit_stripped.tar','--strip-components=1'], cwd=ext.sourcedir, env=env)
# First cmake
subprocess.check_call(['cmake', ext.sourcedir] + cmake_args, cwd=self.build_temp, env=env)
# Build all the scanners
subprocess.check_call(['cmake', '--build', '.', '--target', 'multinest'] + build_args, cwd=self.build_temp)
# Re-run cmake to detect built scanner plugins
subprocess.check_call(['cmake', ext.sourcedir], cwd=self.build_temp)
# Main build
subprocess.check_call(['cmake', '--build', '.'] + build_args, cwd=self.build_temp)
# Install
#subprocess.check_call(['cmake', '--build', '.', '--target', 'install'], cwd=self.build_temp)
setup(
name='pyscannerbit',
version='0.0.8',
author='Ben Farmer',
# Add yourself if you contribute to this package
author_email='ben.farmer#gmail.com',
description='A python interface to the GAMBIT scanning module, ScannerBit',
long_description='',
ext_modules=[CMakeExtension('_interface')],
cmdclass=dict(build_ext=CMakeBuild),
zip_safe=False,
packages=['pyscannerbit'],
)
As you can see, I am telling CMake to build the libraries in 'extdir', which it turns out is
/tmp/pip-req-build-d7mfvn1a/build/lib.linux-x86_64-3.6
I had assumed that the files would just be copied from here (or some other temporary directory?) into the final install path in bulk, but perhaps it doesn't work like that (though as I said earlier, these built files do end up in the wheel that is generated). Do these built files need to be added to MANIFEST.in or some 'package_data' entry or something like that? Currently they are not listed anywhere like that, since it was my understanding that those were for moving files around pre-build, not post-build. Currently I only use MANIFEST.in to make sure my sdist tarball gets filled correctly.
For completeness, I am building the package with pip as follows:
python setup.py sdist
pip install -v dist/pyscannerbit-0.0.8.tar.gz
This is just so I know that the build from the tarball works, for later use with PyPI.
The source is on github if you want to try it out: https://github.com/bjfar/pyscannerbit
OK, so it seems that I just had the paths a bit wrong. I was previously setting CMAKE_LIBRARY_OUTPUT_DIRECTORY to
extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
However I needed to point it to
extdir+'/pyscannerbit'
where pyscannerbit is the name of the package. Otherwise the files end up in the parent directory where the build occurs, but not inside the project directory. So then they don't subsequently get copied to the install path.
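In code, the fix boils down to appending the package name to extdir before handing it to CMake. A minimal sketch of the relevant lines inside build_extension, assuming the package directory is literally named pyscannerbit (the pkg_dir variable name is just illustrative):

# The built libraries must land inside the package directory so that
# build_ext copies them into the installed package together with the
# pure-Python files.
extdir = os.path.abspath(os.path.dirname(self.get_ext_fullpath(ext.name)))
pkg_dir = os.path.join(extdir, 'pyscannerbit')

cmake_args = ['-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=' + pkg_dir,
              '-DPYTHON_EXECUTABLE=' + sys.executable]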
Related
What is the most efficient way to list all dependencies required to deploy a working project elsewhere (on a different OS, say)?
Python 2.7, Windows dev environment, not using a virtualenv per project, but a global dev environment, installing libraries as needed, happily hopping from one project to the next.
I've kept track of most (not sure all) libraries I had to install for a given project. I have not kept track of any sub-dependencies that came auto-installed with them. Doing pip freeze lists both, plus all the other libraries that were ever installed.
Is there a way to list what you need to install, no more, no less, to deploy the project?
EDIT In view of the answers below, some clarification. My project consists of a bunch of modules (that I wrote), each with a bunch of imports. Should I just copy-paste all the imports from all modules into a single file, sort it while eliminating duplicates, and throw out everything from the standard library (and how do I know which modules those are)? Or is there a better way? That's the question.
pipreqs solves the problem. It generates a project-level requirements.txt file.
Install pipreqs: pip install pipreqs
Generate the project-level requirements.txt file: pipreqs /path/to/your/project/
The requirements file will be saved as /path/to/your/project/requirements.txt
If you want to read more about the advantages of pipreqs over pip freeze, read about it here
Scan your import statements. Chances are you only import things you explicitly wanted to import, and not the dependencies.
Make a list like the one pip freeze does, then create and activate a virtualenv.
Do pip install -r your_list, and try to run your code in that virtualenv. Heed any ImportError exceptions, match them to packages, and add to your list. Repeat until your code runs without problems.
Now you have a list to feed to pip install on your deployment site.
This is extremely manual, but requires no external tools, and forces you to make sure that your code runs. (Running your test suite as a check is great but not sufficient.)
On your terminal type:
pip install pipdeptree
cd <your project root>
pipdeptree
I found the answers here didn't work too well for me, as I only wanted the imports from inside our repository (e.g. import requests I don't need, but from my.module.x import y I do need).
I noticed that PyInstaller had perfectly good functionality for this though. I did a bit of digging and managed to find their dependency graph code, then just created a function to do what I wanted with a bit of trial and error. I made a gist here since I'll likely need it again in the future, but here is the code:
import os
from PyInstaller.depend.analysis import initialize_modgraph
def get_import_dependencies(*scripts):
"""Get a list of all imports required.
Args: script filenames.
Returns: list of imports
"""
script_nodes = []
scripts = set(map(os.path.abspath, scripts))
# Process the scripts and build the map of imports
graph = initialize_modgraph()
for script in scripts:
graph.run_script(script)
for node in graph.nodes():
if node.filename in scripts:
script_nodes.append(node)
# Search the imports to find what is in use
dependency_nodes = set()
def search_dependencies(node):
for reference in graph.getReferences(node):
if reference not in dependency_nodes:
dependency_nodes.add(reference)
search_dependencies(reference)
for script_node in script_nodes:
search_dependencies(script_node)
return list(sorted(dependency_nodes))
if __name__ == '__main__':
# Show the PyInstaller imports used in this file
for node in get_import_dependencies(__file__):
if node.identifier.split('.')[0] == 'PyInstaller':
print(node)
All the node types are defined in PyInstaller.lib.modulegraph.modulegraph, such as SourceModule, MissingModule, Package and BuiltinModule. These will come in useful when performing checks.
Each of these has an identifier (path.to.my.module), and depending on the node type, it may have a filename (C:/path/to/my/module/__init__.py), and packagepath (['C:/path/to/my/module']).
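For example, building on get_import_dependencies above, one could separate resolved modules from anything the graph failed to find. A rough sketch, assuming the node classes behave as described (isinstance checks against the modulegraph classes, plus the identifier and filename attributes):

from PyInstaller.lib.modulegraph.modulegraph import (
    SourceModule, Package, MissingModule,
)

# Keep resolvable source modules/packages, report unresolved imports.
for node in get_import_dependencies(__file__):
    if isinstance(node, MissingModule):
        print('unresolved import:', node.identifier)
    elif isinstance(node, (SourceModule, Package)):
        print(node.identifier, '->', node.filename)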
I can't really post any extra code as it is quite specific to our setup using pyarmor with PyInstaller, but I can happily say it has worked flawlessly so far.
You could use the findpydeps module I wrote:
Install it via pip: pip install findpydeps
If you have a main file: findpydeps -l -i path/to/main.py (the -l will follow the imports in the file)
Or your code is in a folder: findpydeps -i path/to/folder
Most importantly, the output is pip-friendly:
do findpydeps -i . > requirements.txt (assuming . is your project's directory)
then pip install -r requirements.txt
You can of course search through multiple directories and files for requirements, like: findpydeps -i path/to/file1.py path/to/folder path/to/file2.py, etc.
By default, it will remove the packages that are in the Python standard library, as well as local packages. Refer to the -r/--removal-policy argument for more info.
If you don't want imports that are done in if, try/except or with blocks, just add --no-blocks. The same goes for functions, with --no-functions.
Anyway, you get the idea: there are a lot of options (most of them are not discussed here). Refer to the findpydeps -h output for more help!
The way to do this is analyze your imports. To automate that, check out Snakefood. Then you can make a requirements.txt file and get on your way to using virtualenv.
The following will list the dependencies, excluding modules from the standard library:
sfood -fuq package.py | sfood-filter-stdlib | sfood-target-files
Related questions:
Get a list of python packages used by a Django Project
list python package dependencies without loading them?
You can simply use pipreqs, install it using:
pip install pipreqs
Then run pipreqs . in the project directory.
A text file named requirements.txt will be created for you, which looks like this:
numpy==1.21.1
pytest==6.2.4
matplotlib==3.4.2
PySide2==5.15.2
I would just run something like this:
import importlib
import os
import pathlib
import re
import sys, chardet
from sty import fg
sys.setrecursionlimit(100000000)
dependenciesPaths = list()
dependenciesNames = list()
paths = sys.path
red = fg(255, 0, 0)
green = fg(0, 200, 0)
end = fg.rs
def main(path):
try:
print("Finding imports in '" + path + "':")
file = open(path)
contents = file.read()
wordArray = re.split(" |\n", contents)
currentList = list()
nextPaths = list()
skipWord = -1
for wordNumb in range(len(wordArray)):
word = wordArray[wordNumb]
if wordNumb == skipWord:
continue
elif word == "from":
currentList.append(wordArray[wordNumb + 1])
skipWord = wordNumb + 2
elif word == "import":
currentList.append(wordArray[wordNumb + 1])
currentList = set(currentList)
for i in currentList:
print(i)
print("Found imports in '" + path + "'")
print("Finding paths for imports in '" + path + "':")
currentList2 = currentList.copy()
currentList = list()
for i in currentList2:
if i in dependenciesNames:
print(i, "already found")
else:
dependenciesNames.append(i)
try:
fileInfo = importlib.machinery.PathFinder().find_spec(i)
print(fileInfo.origin)
dependenciesPaths.append(fileInfo.origin)
currentList.append(fileInfo.origin)
except AttributeError as e:
print(e)
print(i)
print(importlib.machinery.PathFinder().find_spec(i))
# print(red, "Odd noneType import called ", i, " in path ", path, end, sep='')
print("Found paths for imports in '" + path + "'")
for fileInfo in currentList:
main(fileInfo)
except Exception as e:
print(e)
if __name__ == "__main__":
# args
args = sys.argv
print(args)
if len(args) == 2:
p = args[1]
elif len(args) == 3:
p = args[1]
open(args[2], "a").close()
sys.stdout = open(args[2], "w")
else:
print('Usage')
print('PyDependencies <InputFile>')
print('PyDependencies <InputFile> <OutputFile>')
sys.exit(2)
if not os.path.exists(p):
print(red, "Path '" + p + "' is not a real path", end, sep='')
elif os.path.isdir(p):
print(red, "Path '" + p + "' is a directory, not a file", end, sep='')
elif "".join(pathlib.Path(p).suffixes) != ".py":
print(red, "Path '" + p + "' is not a python file", end, sep='')
else:
print(green, "Path '" + p + "' is a valid python file", end, sep='')
main(p)
deps = set(dependenciesNames)
print(deps)
sys.exit()
If you're using an Anaconda virtual environment, you can run the below command inside the environment to create a txt file of all the dependencies used in the project.
conda list -e > requirements.txt
This answer is to help someone list all dependencies with versions from within a Python script itself. It will list all dependencies in the user's virtual environment.
from pip._internal.operations import freeze
x = freeze.freeze()
for dependency in x:
print(dependency)
For this you need pip available as an importable package. It can be installed with the following command.
pip install pip
The print output would look like the following.
certifi==2020.12.5
chardet==4.0.0
idna==2.10
numpy==1.20.3
oauthlib==3.1.0
pandas==1.2.4
pip==21.1.2
python-dateutil==2.8.1
pytz==2021.1
requests==2.25.1
requests-oauthlib==1.3.0
setuptools==41.2.0
six==1.16.0
urllib3==1.26.4
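As a side note, a similar per-environment listing can be produced without relying on pip's private internals, using importlib.metadata from the standard library (available since Python 3.8). A small sketch:

from importlib.metadata import distributions

# Enumerate every distribution installed in the current environment,
# printing name==version lines similar to `pip freeze`.
for dist in sorted(distributions(), key=lambda d: d.metadata['Name'].lower()):
    print('{}=={}'.format(dist.metadata['Name'], dist.version))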
I have an open source project (GridCal) and I tell users to install the package with pip install GridCal or pip3 install GridCal for unix systems.
The setup file is this:
from distutils.core import setup
import sys
import os
import platform
from GridCal.grid.CalculationEngine import __GridCal_VERSION__
name = "GridCal"
version = str(__GridCal_VERSION__)
description = "Research Oriented electrical simulation software."
# Python 3.5 or later needed
if sys.version_info < (3, 5, 0, 'final', 0):
raise (SystemExit, 'Python 3.5 or later is required!')
# Build a list of all project modules
packages = []
for dir_name, dir_names, file_names in os.walk(name):
if '__init__.py' in file_names:
packages.append(dir_name.replace('/', '.'))
package_dir = {name: name}
# Data_files (e.g. doc) needs (directory, files-in-this-directory) tuples
data_files = []
for dir_name, dir_names, file_names in os.walk('doc'):
files_list = []
for filename in file_names:
fullname = os.path.join(dir_name, filename)
files_list.append(fullname)
data_files.append(('share/' + name + '/' + dir_name, files_list))
if platform.system() == 'Windows':
# list the packages (On windows anaconda is assumed)
required_packages = ["numpy",
"scipy",
"networkx",
"pandas",
"xlwt",
"xlrd",
# "PyQt5",
"matplotlib",
"qtconsole",
"pysot",
"openpyxl",
"pulp"
]
else:
# make the desktop entry
make_linux_desktop_file(version_=version, comment=description)
# list the packages
required_packages = ["numpy",
"scipy",
"networkx",
"pandas",
"xlwt",
"xlrd",
"PyQt5",
"matplotlib",
"qtconsole",
"pysot",
"openpyxl",
"pulp"
]
# Read the license
with open('LICENSE.txt', 'r') as f:
license_text = f.read()
setup(
# Application name:
name=name,
# Version number (initial):
version=version,
# Application author details:
author="Santiago Peñate Vera",
author_email="santiago.penate.vera#gmail.com",
# Packages
packages=packages,
data_files=data_files,
# Include additional files into the package
include_package_data=True,
# Details
url="http://pypi.python.org/pypi/GridCal/",
# License file
license=license_text,
# description
description=description,
# long_description=open("README.txt").read(),
# Dependent packages (distributions)
install_requires=required_packages,
setup_requires=required_packages
)
From time to time I get user reports saying that the program is missing modules: https://github.com/SanPen/GridCal/issues/12
I have specified the list of packages both in install_requires and setup_requires.
Is this a pip bug, or shall I do something else?
Your setup.py imports GridCal.grid.CalculationEngine, which imports almost all of your dependencies. That is, your setup.py imports the dependencies before installing them.
If you try to install it into a new empty virtual env detached from your global site-packages, it surely doesn't work:
$ virtualenv --no-site-packages --python python3.4 test-gcal
Running virtualenv with interpreter /usr/bin/python3.4
Using base prefix '/usr'
New python executable in /home/phd/tmp/test-gcal/bin/python3.4
Also creating executable in /home/phd/tmp/test-gcal/bin/python
Installing setuptools, pip, wheel...done.
$ source test-gcal/bin/activate
$ pip install GridCal
Collecting GridCal
Using cached GridCal-1.85.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-c7q9pbep/GridCal/setup.py", line 5, in <module>
from GridCal.grid.CalculationEngine import __GridCal_VERSION__
File "/tmp/pip-build-c7q9pbep/GridCal/GridCal/grid/CalculationEngine.py", line 18, in <module>
from GridCal.grid.JacobianBased import IwamotoNR, Jacobian, LevenbergMarquardtPF
File "/tmp/pip-build-c7q9pbep/GridCal/GridCal/grid/JacobianBased.py", line 19, in <module>
from numpy import array, angle, exp, linalg, r_, Inf, conj, diag, asmatrix, asarray, zeros_like, zeros, complex128, \
ImportError: No module named 'numpy'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-c7q9pbep/GridCal/
The fix is relatively straightforward: you have to move __GridCal_VERSION__ from GridCal/grid/CalculationEngine.py to a separate GridCal/version.py (or __version__.py or something like this) and do from GridCal.version import __GridCal_VERSION__ in setup.py.
Please remember that this import will only work if your GridCal/__init__.py is empty or only imports builtin/standard modules. If said __init__.py directly or indirectly imports a (not yet installed) dependency, version.py cannot be imported. There is a way to overcome this in setup.py, but I will skip it for now. If you ever need that solution, ask again.
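A minimal sketch of that split (the file name version.py and the version value shown are placeholders; only the idea matters):

# GridCal/version.py
# Keep this module free of third-party imports so setup.py can import it
# before any of the dependencies have been installed.
__GridCal_VERSION__ = '1.85'

# setup.py then replaces the CalculationEngine import with:
from GridCal.version import __GridCal_VERSION__

version = str(__GridCal_VERSION__)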
I am making a Python package that has a C++ extension module and someone else's shared library that it requires. I want everything installable via pip. My current setup.py file works when I use pip install -e . but when I don't use develop mode (i.e. omit the -e) I get "cannot open shared object file" when importing the module in Python. I believe the reason is that setuptools doesn't consider the shared library to be part of my package, so the relative link to the library is broken during installation when the files are copied to the install directory.
Here is what my setup.py file looks like:
from setuptools import setup, Extension, Command
import setuptools.command.develop
import setuptools.command.build_ext
import setuptools.command.install
import distutils.command.build
import subprocess
import sys
import os
# This function downloads and builds the shared-library
def run_clib_install_script():
build_clib_cmd = ['bash', 'clib_install.sh']
if subprocess.call(build_clib_cmd) != 0:
sys.exit("Failed to build C++ dependencies")
# I make a new command that will build the shared-library
class build_clib(Command):
user_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
run_clib_install_script()
# I subclass install so that it will call my new command
class install(setuptools.command.install.install):
def run(self):
self.run_command('build_clib')
setuptools.command.install.install.run(self)
# I do the same for build...
class build(distutils.command.build.build):
sub_commands = [
('build_clib', lambda self: True),
] + distutils.command.build.build.sub_commands
# ...and the same for develop
class develop(setuptools.command.develop.develop):
def run(self):
self.run_command('build_clib')
setuptools.command.develop.develop.run(self)
# These are my includes...
# note that /clib/include only exists after calling clib_install.sh
cwd = os.path.dirname(os.path.abspath(__file__))
include_dirs = [
cwd,
cwd + '/clib/include',
cwd + '/common',
]
# These are my arguments for the compiler to my shared-library
lib_path = os.path.join(cwd, "clib", "lib")
library_dirs = [lib_path]
link_args = [os.path.join(lib_path, "libclib.so")]
# My extension module gets these arguments so it can link to clib
mygen_module = Extension('mygen',
language="c++14",
sources=["common/mygen.cpp"],
libraries=['clib'],
extra_compile_args=['-std=c++14'],
include_dirs=include_dirs,
library_dirs=library_dirs,
extra_link_args=link_args
+ ['-Wl,-rpath,$ORIGIN/../clib/lib'])
# I use cmdclass to override the default setuptool commands
setup(name='mypack',
cmdclass = {'install': install,
'build_clib': build_clib, 'build': build,
'develop': develop},
packages=['mypack'],
ext_package='mypack',
ext_modules=[mygen_module],
# package_dir={'mypack': '.'},
# package_data={'mypack': ['docs/*md']},
include_package_data=True)
I subclass some of the setuptools commands in order to build the shared-library before it compiles the extension. clib_install.sh is a bash script that locally downloads and builds the shared library in /clib, creating the headers (in /clib/include) and .so file (in /clib/lib). To solve problems with linking to shared-library dependencies I used $ORIGIN/../clib/lib as a link argument so that the absolute path to clib isn't needed.
Unfortunately, the /clib directory doesn't get copied to the install location. I tried tinkering with package_data but it didn't copy my directory over. In fact, I don't even know what pip/setuptools does with /clib after the script is called, I guess it is made in some temporary build directory and gets deleted after. I am not sure how to get /clib to where it needs to be after it is made.
Listing the built headers and shared library under package_data tells setuptools to copy them into the installed package:
package_data={
    'mypack': [
        'clib/include/*.h',
        'clib/lib/*.so',
        'docs/*md',
    ]
},
I'm using Python 2.6 and cx_Freeze 4.1.2 on a Windows system. I've created the setup.py to build my executable and everything works fine.
When cx_Freeze runs, it moves everything to the build directory. I have some other files that I would like included in my build directory. How can I do this? Here's my structure:
src\
setup.py
janitor.py
README.txt
CHANGELOG.txt
helpers\
uncompress\
unRAR.exe
unzip.exe
Here's my snippet:
setup(
    name='Janitor',
    version='1.0',
    description='Janitor',
    author='John Doe',
    author_email='john.doe#gmail.com',
    url='http://www.this-page-intentionally-left-blank.org/',
    data_files=[
        ('helpers\uncompress', ['helpers\uncompress\unzip.exe']),
        ('helpers\uncompress', ['helpers\uncompress\unRAR.exe']),
        ('', ['README.txt'])
    ],
    executables=[
        Executable('janitor.py')  # initScript
    ]
)
I can't seem to get this to work. Do I need a MANIFEST.in file?
Figured it out.
from cx_Freeze import setup,Executable
includefiles = ['README.txt', 'CHANGELOG.txt', 'helpers\uncompress\unRAR.exe', 'helpers\uncompress\unzip.exe']
includes = []
excludes = ['Tkinter']
packages = ['do','khh']
setup(
name = 'myapp',
version = '0.1',
description = 'A general enhancement utility',
author = 'lenin',
author_email = 'le...#null.com',
options = {'build_exe': {'includes':includes,'excludes':excludes,'packages':packages,'include_files':includefiles}},
executables = [Executable('janitor.py')]
)
Note:
include_files must contain only paths relative to the setup.py script, else the build will fail.
include_files can be a list of strings, i.e. a bunch of files with their relative paths,
or
include_files can be a list of tuples, in which the first half of the tuple is the file name with the absolute path and the second half is the destination filename with the absolute path.
(When the lack of the documentation arises, consult Kermit the Frog)
There's a more complex example at: cx_freeze - wxPyWiki
The lacking documentation of all the options is at: cx_Freeze (Internet Archive)
With cx_Freeze, I still get a build output of 11 files in a single folder, though, unlike with Py2Exe.
Alternatives: Packaging | The Mouse Vs. Python
Also, you can create a separate script that copies the files after the build. It's what I use to rebuild the app on Windows (you should have "GNU utilities for Win32" installed so that "cp" works).
build.bat:
cd .
del build\*.* /Q
python setup.py build
cp -r icons build/exe.win32-2.7/
cp -r interfaces build/exe.win32-2.7/
cp -r licenses build/exe.win32-2.7/
cp -r locale build/exe.win32-2.7/
pause
In order to find your attached files (include_files = [-> your attached files <-]) at run time, you should insert the following function into your application code:
import os
import sys

def find_data_file(filename):
    if getattr(sys, 'frozen', False):
        # The application is frozen
        datadir = os.path.dirname(sys.executable)
    else:
        # The application is not frozen
        # Change this bit to match where you store your data files:
        datadir = os.path.dirname(__file__)
    return os.path.join(datadir, filename)
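A hypothetical call, assuming one of the files listed in include_files (say helpers/uncompress/unzip.exe) ends up next to the frozen executable:

# Resolve the bundled file relative to the executable (frozen) or the
# source tree (not frozen).
unzip_path = find_data_file(os.path.join('helpers', 'uncompress', 'unzip.exe'))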
See cx-freeze: using data files