PyPA setup.py test scripts - python

I'm attempting to follow the advice and structure laid out in the Python packaging docs. In the setup function you can specify test dependencies with tests_require, and you can run scripts on install just by specifying scripts. But can I have a script that is only run when setting up for testing?
edit: Important parts of my setup.py
from setuptools import setup
# To use a consistent encoding
from codecs import open
from os import path
import subprocess
from setuptools.command.test import test


class setupTestRequirements(test, object):
    def run_script(self):
        cmd = ['bash', 'bin/test_dnf_requirements.sh']
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        ret = p.communicate()
        print(ret[0])

    def run(self):
        self.run_script()
        super(setupTestRequirements, self).run()

...

setup(
    ...
    scripts=['bin/functional_dnf_requirements.sh'],
    install_requires=['jira', 'PyYAML'],
    tests_require=['flake8', 'autopep8', 'mock'],
    cmdclass={'test': setupTestRequirements}
)

No, specifying scripts does not mean the files will be executed on package installation. The scripts keyword marks Python files in your package that are intended to be run as standalone programs after the package is installed (maybe the name is a bit misleading). Example: say you have a file spam with the content:
#!/usr/bin/env python
if __name__ == '__main__':
    print('eggs!')
If you mark this file as a script by adding it to scripts in your setup.py:
from setuptools import setup

setup(
    ...
    scripts=['spam'],
)
after the package installation you can run spam as a standalone program in your terminal:
$ spam
eggs!
Read this tutorial for more info on command line scripts.
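As a side note, instead of shipping a raw script file you can declare a console entry point and let setuptools generate a platform-appropriate wrapper executable for you. A minimal sketch, assuming a package named spam that exposes a main() function (the names here are illustrative):

from setuptools import setup

setup(
    name='spam',
    packages=['spam'],
    entry_points={
        'console_scripts': [
            # running "spam" on the command line calls spam.main()
            'spam = spam:main',
        ],
    },
)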
Now, if you want to execute custom code when testing, you have to override the default test command. In your setup.py:
from setuptools import setup
from setuptools.command.test import test


class MyCustomTest(test):
    def run(self):
        print('>>>> this is my custom test command <<<<')
        super().run()


setup(
    ...
    cmdclass={'test': MyCustomTest}
)
Now you will notice an additional print when running tests:
$ python setup.py test
running test
>>>> this is my custom test command <<<<
running egg_info
...
running build_ext
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK
Edit: if you want to run a custom bash script before executing tests, adapt the MyCustomTest.run() method. Example script script.sh:
#!/usr/bin/env bash
echo -n ">>> this is my custom bash script <<<"
Adapting the MyCustomTest class in setup.py:
import subprocess
from setuptools import setup
from setuptools.command.test import test


class MyCustomTest(test):
    def run_some_script(self):
        cmd = ['bash', 'script.sh']
        # python3.5 and above
        # ret = subprocess.run(cmd, stdout=subprocess.PIPE, universal_newlines=True)
        # print(ret.stdout)
        # old python2 versions
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        ret = p.communicate()
        print(ret[0])

    def run(self):
        self.run_some_script()
        super().run()
Output:
$ python setup.py test
running test
>>> this is my custom bash script <<<
running egg_info
writing spam.egg-info/PKG-INFO
writing dependency_links to spam.egg-info/dependency_links.txt
writing top-level names to spam.egg-info/top_level.txt
reading manifest file 'spam.egg-info/SOURCES.txt'
writing manifest file 'spam.egg-info/SOURCES.txt'
running build_ext
----------------------------------------------------------------------
Ran 0 tests in 0.000s
OK

Related

How do I generate python grpc code from within a setuptools installer (setup.py)?

We have some proto files for gRPC in a repo, and I read that it is not good to commit generated code. So I figured the generation should be part of the package installation (e.g. setuptools, setup.py).
However, to generate gRPC code you need to first install the package by running pip install grpcio-tools, according to the docs. But the purpose of setup.py is to automatically pull down dependencies like grpcio-tools.
So is there a best practice for doing this? That is, how do you generate code that depends on another Python package from within setuptools? Am I better off just creating a separate build.sh script that manually pip-installs and generates the code? Or should I expect users of the package to already have grpcio-tools installed?
As far as I know, the "current" best practice is:
pip manages dependencies
setup.py performs build
Executing "pip install ." is almost equivalent to performing "pip install -r requirements.txt" + "python setup.py build" + "python setup.py install".
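Following that split, build-time-only dependencies such as grpcio-tools can be declared via setup_requires so that setuptools fetches them before the build commands run. A hedged sketch (with modern tooling you would list them under build-system.requires in pyproject.toml instead):

from setuptools import setup

setup(
    # ...
    setup_requires=['grpcio-tools'],  # needed only to generate code at build time
    install_requires=['grpcio'],      # needed at runtime by the generated code
)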
This is a custom command that generates Python sources from proto files:

import pkg_resources
from setuptools import Command


class GrpcTool(Command):
    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        import grpc_tools.protoc

        proto_include = pkg_resources.resource_filename('grpc_tools', '_proto')
        grpc_tools.protoc.main([
            'grpc_tools.protoc',
            '-I{}'.format(proto_include),
            '--python_out=SOME_PATH/',
            '--grpc_python_out=SOME_PATH/',
            'SOME_PROTO.proto'
        ])
It is invoked by customizing the build_py command, like this:

from setuptools.command.build_py import build_py


class BuildPyCommand(build_py):
    def run(self):
        self.run_command('grpc')
        super(BuildPyCommand, self).run()
Note the import inside the run method. It seems that pip runs setup.py several times, both before and after installing the requirements, so if you put the import at the top of the file, the build fails.
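For completeness, a minimal sketch of how the two commands could be wired together; the 'grpc' key in cmdclass is an arbitrary name, but it must match the string passed to self.run_command above:

from setuptools import setup

setup(
    # usual metadata ...
    cmdclass={
        'grpc': GrpcTool,            # the code-generation command
        'build_py': BuildPyCommand,  # runs 'grpc' before the normal build
    },
)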
Along with makeroo's approach, an alternative way is to execute the grpc_tools module as a subprocess.
The benefit of this approach is that you reliably get a generation result: 0 for success and 1 for error.
import subprocess

proto_files = ["proto/file1.proto", "proto/file2.proto"]

for file in proto_files:
    args = "--proto_path=. --python_out=. --grpc_python_out=. {0}".format(file)
    result = subprocess.call("python -m grpc_tools.protoc " + args, shell=True)
    print("grpc generation result for '{0}': code {1}".format(file, result))
The code above will write the generated Python files to the proto directory where the .proto files reside. Note that because it runs at module level, it executes every time setup.py is invoked; to run it only during builds, wrap it in a command like GrpcTool above.

Get Gitlab's Continuous Integration to compile a Python extension written in C

Context
I have a Python project for which I wrap some C/C++ code (using the excellent PyBind library). I have a set of C and Python unit tests and I've configured Gitlab's CI to run them at each push.
The C tests use a minimalist unit test framework called minunit and I use Python's unittest suite.
Before running the C tests, all the C code is compiled and then tested. I'd like to also compile the C/C++ wrapper for Python before running the Python tests, but I am having a hard time doing it.
Question in a few words
Is there a standard/good way to get Gitlab-CI to build a Python extension using setuptools before running unit-tests?
Question with more words / Description of what I tried
To compile the C/C++ wrapper locally, I use setuptools with a setup.py file including a build_ext command.
I locally compile everything with python setup.py build_ext --inplace (the last arg --inplace will just copy the compiled file to the current directory).
As far as I know, this is quite standard.
What I tried to do on Gitlab is to have a Python script (code below) that runs a few commands via os.system (which appears to be bad practice...).
The first command is to run a script building and running all C tests. This works but I'm happy to take recommendations (should I configure Gitlab CI to run C tests separately?).
Now, the problem comes when I try to build the C/C++ wrapper, with os.system("cd python/ \npython setup.py build_ext --inplace"). This generates the error
  File "setup.py", line 1, in <module>
    from setuptools import setup, Extension
ImportError: No module named setuptools
So I tried to modify my Gitlab CI configuration file to install python-dev. My .gitlab-ci.yml looks like

test:
  script:
    - apt-get install -y python-dev
    - python run_tests.py
But, not having sudo rights on the Gitlab server, I get the following error: E: Could not open lock file /var/lib/dpkg/lock - open (13: Permission denied).
Does anyone know a way around that, or a better way to tackle this problem?
Any help would be more than welcome!
run_tests.py file
import unittest
import os
from shutil import copyfile
import glob


class AllTests(unittest.TestCase):
    def test_all(self):
        # this automatically loads all tests in current dir
        testsuite = unittest.TestLoader().discover('tests/Python_tests')
        # run tests
        result = unittest.TextTestRunner(verbosity=2).run(testsuite)
        # send/print results
        self.assertEqual(result.failures, [], 'Failure')


if __name__ == "__main__":
    # run C tests
    print(' ------------------------------------------------------ C TESTS')
    os.system("cd tests/C_tests/ \nbash run_all.sh")
    # now python tests
    print(' ------------------------------------------------- PYTHON TESTS')
    # first build the shared library compiled from C++ and copy it into the python test directory
    # build lib
    os.system("cd python/ \npython setup.py build_ext --inplace")
    # copy lib to the right place
    dest_dir = 'tests/Python_tests/'
    for file in glob.glob(r'python/*.so'):
        print('Copying file to test dir : ', file)
        copyfile(file, dest_dir + file.replace('python/', ''))
    # run Python tests
    unittest.main(verbosity=0)
My suggestion would be to move the entire test-running logic into the setup script.
using the test command
First of all, setuptools ships a test command, so you can run the tests via python setup.py test. Even better, test invokes the build_ext command under the hood and places the built extensions so that they are accessible in the tests, so there is no need to invoke python setup.py build_ext explicitly:
$ python setup.py test
running test
running egg_info
creating so.egg-info
writing so.egg-info/PKG-INFO
writing dependency_links to so.egg-info/dependency_links.txt
writing top-level names to so.egg-info/top_level.txt
writing manifest file 'so.egg-info/SOURCES.txt'
reading manifest file 'so.egg-info/SOURCES.txt'
writing manifest file 'so.egg-info/SOURCES.txt'
running build_ext
building 'wrap_fib' extension
creating build
creating build/temp.linux-aarch64-3.6
aarch64-unknown-linux-gnu-gcc -pthread -fPIC -I/data/gentoo64/usr/include/python3.6m -c wrap_fib.c -o build/temp.linux-aarch64-3.6/wrap_fib.o
aarch64-unknown-linux-gnu-gcc -pthread -fPIC -I/data/gentoo64/usr/include/python3.6m -c cfib.c -o build/temp.linux-aarch64-3.6/cfib.o
creating build/lib.linux-aarch64-3.6
aarch64-unknown-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,--as-needed -L. build/temp.linux-aarch64-3.6/wrap_fib.o build/temp.linux-aarch64-3.6/cfib.o -L/data/gentoo64/usr/lib64 -lpython3.6m -o build/lib.linux-aarch64-3.6/wrap_fib.cpython-36m-aarch64-linux-gnu.so
copying build/lib.linux-aarch64-3.6/wrap_fib.cpython-36m-aarch64-linux-gnu.so ->
test_fib_0 (test_fib.FibonacciTests) ... ok
test_fib_1 (test_fib.FibonacciTests) ... ok
test_fib_10 (test_fib.FibonacciTests) ... ok
----------------------------------------------------------------------
Ran 3 tests in 0.002s
OK
(I used the code from the Cython Book examples repository to play with, but the output should be pretty similar to what PyBind produces).
using the extra keywords
Another feature that may come in handy is the extra keywords setuptools adds: test_suite, tests_require, test_loader (docs). Here's an example of embedding a custom test suite as you do in run_tests.py:
# setup.py
import unittest

from Cython.Build import cythonize
from setuptools import setup, Extension

exts = cythonize([Extension("wrap_fib", sources=["cfib.c", "wrap_fib.pyx"])])


def pysuite():
    return unittest.TestLoader().discover('tests/python_tests')


if __name__ == '__main__':
    setup(
        name='so',
        version='0.1',
        ext_modules=exts,
        test_suite='setup.pysuite'
    )
extending the test command
The last requirement is running C tests. We can embed them by overriding the test command and invoking some custom code from there. The advantage of that is that distutils offers a command API with many useful functions, like copying files or executing external commands:
# setup.py
import os
import unittest

from Cython.Build import cythonize
from setuptools import setup, Extension
from setuptools.command.test import test as test_orig

exts = cythonize([Extension("wrap_fib", sources=["cfib.c", "wrap_fib.pyx"])])


class test(test_orig):
    def run(self):
        # run python tests
        super().run()
        # run c tests
        self.announce('Running C tests ...')
        pwd = os.getcwd()
        os.chdir('tests/C_tests')
        self.spawn(['bash', 'run_all.sh'])
        os.chdir(pwd)


def pysuite():
    return unittest.TestLoader().discover('tests/python_tests')


if __name__ == '__main__':
    setup(
        name='so',
        version='0.1',
        ext_modules=exts,
        test_suite='setup.pysuite',
        cmdclass={'test': test}
    )
I extended the original test command, running some extra stuff after the Python unit tests finish (notice the call to an external command via self.spawn). All that is left is replacing the default test command with the custom one by passing cmdclass to the setup function.
Now you have everything collected in the setup script, and python setup.py test will do all the dirty work.
But, not being sudo on the gitlab's server, I get the following error
I don't have any experience with Gitlab CI, but I can't imagine there is no possibility to install packages on the build server. Maybe this question will be helpful: How to use sudo in build script for gitlab ci?
If there really is no other option, you can bootstrap a local copy of setuptools with ez_setup.py. Note, however, that although this method still works, it was deprecated recently.
Also, if you happen to use a recent version of Python (3.4 and newer), then you should have pip bundled with Python distribution, so it should be possible to install setuptools without root permissions with
$ python -m pip install --user setuptools
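Under that assumption, the .gitlab-ci.yml could look roughly like this (an untested sketch; pip installs into the user site, so no root is needed):

test:
  script:
    - python -m pip install --user setuptools
    - python setup.py test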

How can setup.py install an npm module?

I implemented a Python web client that I would like to test.
The server is hosted in the npm registry and gets run locally with node before my functional tests run.
How can I properly install the npm module from my setup.py script?
Here is my current solution, inspired by this post:
import subprocess

from setuptools import setup
from setuptools.command.install import install


class CustomInstallCommand(install):
    def run(self):
        arguments = [
            'npm',
            'install',
            '--prefix',
            'test/functional',
            'promisify'
        ]
        # note: no shell=True here; with an argument list plus shell=True,
        # only 'npm' would actually be executed
        subprocess.call(arguments)
        install.run(self)


setup(
    cmdclass={'install': CustomInstallCommand},
    # ...
)
from setuptools.command.build_py import build_py


class NPMInstall(build_py):
    def run(self):
        # spawn executes an external program; run_command only accepts
        # the name of another distutils/setuptools command
        self.spawn(['npm', 'install', '--prefix', 'test/functional', 'promisify'])
        build_py.run(self)

OR

from distutils.command.build import build


class NPMInstall(build):
    def run(self):
        self.spawn(['npm', 'install', '--prefix', 'test/functional', 'promisify'])
        build.run(self)

Finally:

setuptools.setup(
    cmdclass={
        'npm_install': NPMInstall
    },
    # Usual setup() args.
    # ...
)
Also look here
You are very close. Here is a simple function that does just that. You can remove the "--global" option if you want to install the package for the current project only. Keep in mind that running commands through the shell (shell=True) can present security risks; the sketch below passes the argument list directly instead.
import subprocess


def npm_install(args=["npm", "--global", "install", "search-index"]):
    subprocess.call(args)
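If the install should abort when npm fails (or is missing), a variant with subprocess.check_call does that; a small sketch building on the function above:

import subprocess


def npm_install(args=None):
    if args is None:
        args = ['npm', '--global', 'install', 'search-index']
    # check_call raises CalledProcessError on a non-zero exit code,
    # so a failed npm install fails the build visibly
    subprocess.check_call(args)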

Running custom setuptools build during install

I've tried to make Compass compilation run during setuptools' build, but the following code runs the compilation during an explicit build command and doesn't run during install.
#!/usr/bin/env python
import os
import setuptools
from distutils.command.build import build

SETUP_DIR = os.path.dirname(os.path.abspath(__file__))


class BuildCSS(setuptools.Command):
    description = 'build CSS from SCSS'
    user_options = []

    def initialize_options(self):
        pass

    def run(self):
        os.chdir(os.path.join(SETUP_DIR, 'django_project_dir', 'compass_project_dir'))
        import platform
        if 'Windows' == platform.system():
            command = 'compass.bat compile'
        else:
            command = 'compass compile'
        import subprocess
        try:
            subprocess.check_call(command.split())
        except (subprocess.CalledProcessError, OSError):
            print 'ERROR: problems with compiling Sass. Is Compass installed?'
            raise SystemExit
        os.chdir(SETUP_DIR)

    def finalize_options(self):
        pass


class Build(build):
    sub_commands = build.sub_commands + [('build_css', None)]


setuptools.setup(
    # Custom attrs here.
    cmdclass={
        'build': Build,
        'build_css': BuildCSS,
    },
)
Any custom instructions in Build.run (e.g. some printing) don't apply during install either, yet the dist instance's commands attribute contains only my build command implementation instances. Incredible! But I think the trouble lies in the complex relations between setuptools and distutils. Does anybody know how to make the custom build run during install on Python 2.7?
Update: Found that install definitely doesn't call the build command; instead it calls bdist_egg, which runs build_ext. Seems like I should implement a "Compass" build extension.
Unfortunately, I haven't found the answer. It seems the ability to run post-install scripts correctly exists only in Distutils2. For now you can use this work-around.
Update: Because of setuptools' stack checks, we should override install.do_egg_install, not the run method:
from setuptools.command.install import install


class Install(install):
    def do_egg_install(self):
        self.run_command('build_css')
        install.do_egg_install(self)
Update 2: easy_install runs exactly the bdist_egg command, which is used by install too, so the most correct way (especially if you want easy_install to work) is to override the bdist_egg command. Whole code:
#!/usr/bin/env python
import setuptools
from distutils.command.build import build as _build
from setuptools.command.bdist_egg import bdist_egg as _bdist_egg


class bdist_egg(_bdist_egg):
    def run(self):
        self.run_command('build_css')
        _bdist_egg.run(self)


class build_css(setuptools.Command):
    description = 'build CSS from SCSS'
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        pass  # Here goes CSS compilation.


class build(_build):
    sub_commands = _build.sub_commands + [('build_css', None)]


setuptools.setup(
    # Here your setup args.
    cmdclass={
        'bdist_egg': bdist_egg,
        'build': build,
        'build_css': build_css,
    },
)
You may see how I've used this here.

How to install a Python module as a command line application under Windows?

I need to install a Python module into site-packages that will also be used as a command line application. Suppose I have a module like:
app.py
def main():
    print 'Dummy message'

if __name__ == '__main__':
    main()
setup.py
try:
    from setuptools import setup
except ImportError:
    from distutils.core import setup

if __name__ == '__main__':
    setup(
        name='dummy',
        version='1.0',
        packages=['dummy'],
    )
Creating the dist by:
setup.py sdist
Install:
setup.py install
And now I would like to use it as a command line application by opening the command window and typing just: dummy
Is it possible to create such an application under Windows without having to register system path variables and so on?
You can use the options in setup.py to declare command line scripts. Please refer to this article. On Windows, the script will be created in "C:\Python26\Scripts" (if you didn't change the path) - lots of tools store their scripts there (e.g. "easy_install", "hg", ...).
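For example, a console_scripts entry point makes setuptools generate a dummy executable in the Scripts directory automatically. A sketch, assuming main() lives in dummy/app.py:

from setuptools import setup

setup(
    name='dummy',
    version='1.0',
    packages=['dummy'],
    entry_points={
        'console_scripts': [
            # creates a "dummy" command that calls dummy.app:main
            'dummy = dummy.app:main',
        ],
    },
)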
Put the following in dummy.cmd:
python.exe -m dummy
Or is it dummy.app...
Oh well, it's one of those.
