I use nunjucks for templating the frontend in a python project. Nunjucks templates must be precompiled in production. I don't use extensions or asynchronous filters in the nunjucks templates. Rather than use grunt-task to listen for changes to my templates, I prefer to use the nunjucks-precompile command (offered via npm) to sweep the entire templates directory into templates.js.
The idea is to have the nunjucks-precompile --include ["\\.tmpl$"] path/to/templates > templates.js command execute within setup.py so I can simply piggyback our deployer scripts' regular execution.
I found that a setuptools override and the distutils scripts argument might serve the purpose, but I'm not sure either is the simplest approach.
Another approach is to use subprocess to execute the command directly within setup.py, but I've been cautioned against this (rather preemptively, IMHO) and I don't fully understand why.
Any ideas? Affirmations? Confirmations?
Update (04/2015): If you don't have the nunjucks-precompile command available, simply use the Node Package Manager to install nunjucks like so:
$ npm install nunjucks
Pardon the quick self-answer. I hope this helps someone out there in the ether. I want to share this now that I've worked out a solution I'm satisfied with.
Here's a solution that's safe, based on Peter Lamut's write-up. Note that this does not use shell=True in the subprocess invocation. You may bypass the grunt-task requirement on your Python deployment system, and use this for obfuscation and JS packaging all the same.
from setuptools import setup
from setuptools.command.install import install
import subprocess
import os

class CustomInstallCommand(install):
    """Custom install command that runs a shell command (without a shell) before installation."""
    def run(self):
        dir_path = os.path.dirname(os.path.realpath(__file__))
        template_path = os.path.join(dir_path, 'src/path/to/templates')
        templatejs_path = os.path.join(dir_path, 'src/path/to/templates.js')
        templatejs = subprocess.check_output([
            'nunjucks-precompile',
            '--include',
            '["\\.tmpl$"]',
            template_path
        ])
        # check_output returns bytes, so write the file in binary mode
        with open(templatejs_path, 'wb') as f:
            f.write(templatejs)
        install.run(self)

setup(cmdclass={'install': CustomInstallCommand},
      ...
      )
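With this in place, the deployer scripts' usual python setup.py install invocation regenerates templates.js before the normal install steps run, so no separate grunt watcher is needed.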
I think that the link here encapsulates what you are trying to achieve.
I am trying to create a python package (deb & rpm) from cmake, ideally using cpack. I did read
https://cmake.org/cmake/help/latest/cpack_gen/rpm.html and,
https://cmake.org/cmake/help/latest/cpack_gen/deb.html
The installation works just fine (using component install) for my shared library. However, I cannot make sense of the documentation for installing the Python binding (glue) code. Using the standard cmake install mechanism, I tried:
install(
    FILES __init__.py library.py
    DESTINATION ${ACME_PYTHON_PACKAGE_DIR}/project_name
    COMPONENT python)
And then, using a brute-force approach, I ended up with:
# debian based package (relative path)
set(ACME_PYTHON_PACKAGE_DIR lib/python3/dist-packages)
and
# rpm based package (full path required)
set(ACME_PYTHON_PACKAGE_DIR /var/lang/lib/python3.8/site-packages)
The above is derived from:
debian % python -c 'import site; print(site.getsitepackages())'
['/usr/local/lib/python3.9/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.9/dist-packages']
while:
rpm % python -c 'import site; print(site.getsitepackages())'
['/var/lang/lib/python3.8/site-packages']
It is pretty clear that the brute-force approach is not portable and is doomed to fail on the next release of Python. The only solution I can think of is generating a temporary setup.py script (using setuptools) that will do the install. Typically cmake would call the following process:
% python setup.py install --root ${ACME_PYTHON_INSTALL_ROOT}
My questions are:
Did I understand the cmake/cpack documentation correctly for Python packages? If so, this means I need to generate an intermediate setup.py script.
I have been searching through the cmake/cpack codebase (git grep setuptools) but did not find helper functions to handle generating setup.py and passing the resulting files back to cpack. Is there an existing cmake module I could reuse?
I did read some alternative solutions, such as:
How to build debian package with CPack to execute setup.py?
which seems overly complex and geared toward Debian-only systems; I need to handle RPM in my case.
As mentioned in my other solution, the ugly part is dealing with absolute paths in cmake install() commands. I was able to refactor the code to avoid absolute paths in install(). I simply changed the installation to:
install(
    # trailing slash is important:
    DIRECTORY ${SETUP_OUTPUT}/
    # the "." syntax is a reliable mechanism, see:
    # https://gitlab.kitware.com/cmake/cmake/-/issues/22616
    DESTINATION "."
    COMPONENT python)
And then one simply needs to:
set(CMAKE_INSTALL_PREFIX "/")
set(CPACK_PACKAGING_INSTALL_PREFIX "/")
include(CPack)
At this point all install paths need to include /usr explicitly, since we've cleared the value of CMAKE_INSTALL_PREFIX.
The above has been tested for deb and rpm packages. CPACK_BINARY_TGZ also runs properly with this solution:
https://gitlab.kitware.com/cmake/cmake/-/issues/22925
I am going to post the temporary solution I am using at the moment, until someone provides something more robust.
So I eventually managed to stumble upon:
https://alioth-lists.debian.net/pipermail/libkdtree-devel/2012-October/000366.html and,
Using CMake with setup.py
Re-using the above to do an install step instead of a build step can be done as follows:
find_package(Python COMPONENTS Interpreter)
set(SETUP_PY_IN "${CMAKE_CURRENT_SOURCE_DIR}/setup.py.in")
set(SETUP_PY "${CMAKE_CURRENT_BINARY_DIR}/setup.py")
set(SETUP_DEPS "${CMAKE_CURRENT_SOURCE_DIR}/project_name/__init__.py")
set(SETUP_OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/build-python")
configure_file(${SETUP_PY_IN} ${SETUP_PY})
add_custom_command(
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/setup_timestamp
    COMMAND ${Python_EXECUTABLE} ARGS ${SETUP_PY} install --root ${SETUP_OUTPUT}
    COMMAND ${CMAKE_COMMAND} -E touch ${CMAKE_CURRENT_BINARY_DIR}/setup_timestamp
    DEPENDS ${SETUP_DEPS})
add_custom_target(target ALL DEPENDS ${CMAKE_CURRENT_BINARY_DIR}/setup_timestamp)
And then the ugly part is:
install(
    # trailing slash is important:
    DIRECTORY ${SETUP_OUTPUT}/
    DESTINATION "/" # FIXME may cause issues with other cpack generators
    COMPONENT python)
Turns out that the documentation for install() is pretty clear about absolute paths:
https://cmake.org/cmake/help/latest/command/install.html#introduction
DESTINATION
[...]
As absolute paths are not supported by cpack installer generators,
it is preferable to use relative paths throughout.
For reference, here is my setup.py.in:
from setuptools import setup

if __name__ == '__main__':
    setup(name='project_name_python',
          version='${PROJECT_VERSION}',
          package_dir={'': '${CMAKE_CURRENT_SOURCE_DIR}'},
          packages=['project_name'])
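For illustration, configure_file substitutes the ${...} placeholders, so the generated setup.py ends up looking something like this (version and source path are hypothetical values):

from setuptools import setup

if __name__ == '__main__':
    setup(name='project_name_python',
          version='1.2.3',  # substituted from ${PROJECT_VERSION}
          package_dir={'': '/home/user/src/project'},  # substituted from ${CMAKE_CURRENT_SOURCE_DIR}
          packages=['project_name'])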
You can be fancy and avoid the __pycache__ folder by using Python's -B flag (which suppresses writing .pyc files):
COMMAND ${Python_EXECUTABLE} ARGS -B ${SETUP_PY} install --root ${SETUP_OUTPUT}
You can be extra fancy and add a Debian-specific option such as:

if(CPACK_BINARY_DEB)
    set(EXTRA_ARG "--install-layout" "deb")
endif()

and use it as:

COMMAND ${Python_EXECUTABLE} ARGS -B ${SETUP_PY} install --root ${SETUP_OUTPUT} ${EXTRA_ARG}
Note: distutils is deprecated, and the accepted answer has been updated to use setuptools.
I'm trying to add a post-install task to Python distutils as described in How to extend distutils with a simple post install script?. The task is supposed to execute a Python script in the installed lib directory. This script generates additional Python modules the installed package requires.
My first attempt is as follows:
from distutils.core import setup
from distutils.command.install import install

class post_install(install):
    def run(self):
        install.run(self)
        from subprocess import call
        call(['python', 'scriptname.py'],
             cwd=self.install_lib + 'packagename')

setup(
    ...
    cmdclass={'install': post_install},
)
This approach works, but as far as I can tell has two deficiencies:
If the user has used a Python interpreter other than the one picked up from PATH, the post-install script will be executed with a different interpreter, which might cause a problem.
It's not safe against dry-run etc., which I might be able to remedy by wrapping it in a function and calling it with distutils.cmd.Command.execute.
How could I improve my solution? Is there a recommended way / best practice for doing this? I'd like to avoid pulling in another dependency if possible.
The way to address these deficiencies is:
Get the full path to the Python interpreter executing setup.py from sys.executable.
Classes inheriting from setuptools.Command (such as setuptools.command.install.install, which we use here) implement the execute method, which executes a given function in a "safe way", i.e. respecting the --dry-run flag.
Note however that the --dry-run option is currently broken and does not work as intended anyway.
I ended up with the following solution:
import os, sys
from setuptools import setup
from setuptools.command.install import install as _install

def _post_install(dir):
    from subprocess import call
    call([sys.executable, 'scriptname.py'],
         cwd=os.path.join(dir, 'packagename'))

class install(_install):
    def run(self):
        _install.run(self)
        self.execute(_post_install, (self.install_lib,),
                     msg="Running post install task")

setup(
    ...
    cmdclass={'install': install},
)
Note that I use the class name install for my derived class because that is what python setup.py --help-commands will use.
I think the easiest way to perform the post-install, and keep the requirements, is to decorate the call to setup(...):
from setuptools import setup

def _post_install(setup):
    def _post_actions():
        do_things()
    _post_actions()
    return setup

setup = _post_install(
    setup(
        name='NAME',
        install_requires=['...'],
    )
)
This runs setup() at declaration time. Once the requirements installation is done, it runs the _post_install() function, which in turn runs the inner function _post_actions().
The distutils module allows including and installing resource files together with Python modules. But how do you properly include them if the resource files are generated during the build process?
For example, the project is a web application containing CoffeeScript sources that should be compiled into JavaScript and then included in the Python package. Is there a way to integrate this into the normal sdist/bdist process?
I spent a fair while figuring this out; the various suggestions out there are broken in various ways: they break installation of dependencies, don't work in pip, etc. Here's my solution:
in setup.py:
import sys

from setuptools import setup, find_packages
from setuptools.command.install import install
from distutils.command.install import install as _install

class install_(install):
    # inject your own code into this func as you see fit
    def run(self):
        ret = None
        if self.old_and_unmanageable or self.single_version_externally_managed:
            ret = _install.run(self)
        else:
            caller = sys._getframe(2)
            caller_module = caller.f_globals.get('__name__', '')
            caller_name = caller.f_code.co_name
            if caller_module != 'distutils.dist' or caller_name != 'run_commands':
                _install.run(self)
            else:
                self.do_egg_install()
        # This is just an example, a post-install hook.
        # It's a nice way to get at your installed module though.
        import site
        site.addsitedir(self.install_lib)
        sys.path.insert(0, self.install_lib)
        from mymodule import install_hooks
        install_hooks.post_install()
        return ret
Then, in your call to the setup function, pass the arg:
cmdclass={'install': install_}
You could use the same idea for build as opposed to install (see the sketch below), write yourself a decorator to make it easier, etc. This has been tested via pip and via direct 'python setup.py install' invocation.
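For instance, a minimal sketch of the same idea applied to the build command (the post-build action shown is just a placeholder):

from distutils.command.build import build as _build

class build_(_build):
    def run(self):
        # run the standard build first
        _build.run(self)
        # then hook in your own post-build step (placeholder)
        print("running custom post-build step")

# and register it via cmdclass={'build': build_} in setup()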
The best way would be to write a custom build_coffeescript command and make it a subcommand of build. More details are given in other replies to similar/duplicate questions, for example this one:
https://stackoverflow.com/a/1321345/150999
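A minimal sketch of that approach, assuming a hypothetical coffee compiler invocation and source layout:

import subprocess
from distutils.cmd import Command
from distutils.command.build import build as _build

class build_coffeescript(Command):
    description = "compile CoffeeScript sources into JavaScript"
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        # hypothetical compiler call; adjust the paths to your project layout
        subprocess.check_call(['coffee', '-c', '-o', 'mypackage/static/js', 'src/coffee'])

class build(_build):
    # run build_coffeescript before the standard build sub-commands
    sub_commands = [('build_coffeescript', None)] + _build.sub_commands

# register both: cmdclass={'build': build, 'build_coffeescript': build_coffeescript}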
I am distributing a package that has this structure:
mymodule/
    mymodule/__init__.py
    mymodule/code.py
    scripts/script1.py
    scripts/script2.py
The mymodule subdir of mymodule contains code, and the scripts subdir contains scripts that should be executable by the user.
When describing a package installation in setup.py, I use:
scripts=['myscripts/script1.py']
to specify where the scripts should go. During installation they typically go in some platform/user-specific bin directory. The code that I have in mymodule/mymodule needs to make calls to the scripts, though. What is the correct way to find the full path to these scripts? Ideally they should be on the user's path at that point, so if I want to call them from the shell, I should be able to do:
os.system('script1.py args')
But I want to call the script by its absolute path, and not rely on the platform-specific bin directory being on the PATH, as in:
# get the directory where the scripts reside in current installation
scripts_dir = get_scripts_dir()
script1_path = os.path.join(scripts_dir, "script1.py")
os.system("%s args" %(script1_path))
How can this be done? Thanks.
EDIT: Removing the code from the scripts is not a practical solution for me. The reason is that I distribute jobs to a cluster system, and the way I usually do it is like this: imagine you have a set of tasks you want to run. I have a script that takes all the tasks as input and then calls another script, which runs on a single given task. Something like:
main.py:
for task in tasks:
    cmd = "python script.py %s" % (task)
    execute_on_system(cmd)
So main.py needs to know where script.py is, because script.py needs to be a command executable by execute_on_system.
I think you should structure your code so that you don't need to call scripts from your code. Move the code you need from the scripts into your package; then you can call it both from your scripts and from your code.
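A minimal sketch of the suggested refactor (all names are hypothetical):

# mymodule/code.py -- the logic lives in the package
def run_task(task):
    """Do the work that the script used to do inline."""
    print("processing", task)

# scripts/script1.py -- a thin wrapper that stays on the user's PATH
import sys
from mymodule.code import run_task

if __name__ == '__main__':
    run_task(sys.argv[1])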
My use case for this was to check that the directory my scripts are installed into is in the user's path and give a warning if not (since it is often not in the path if installing with --user). Here is the solution I came up with:
from setuptools.command.easy_install import easy_install

class my_easy_install(easy_install):
    # Match the call signature of the easy_install version.
    def write_script(self, script_name, contents, mode="t", *ignored):
        # Run the normal version.
        easy_install.write_script(self, script_name, contents, mode, *ignored)
        # Save the script install directory in the distribution object.
        # This is the same object that is returned by the setup function.
        self.distribution.script_install_dir = self.script_dir

...

dist = setup(...,
             cmdclass={'build_ext': my_builder,  # I also have one of these.
                       'easy_install': my_easy_install,
                       },
             )

if dist.script_install_dir not in os.environ['PATH'].split(':'):
    # Give a sensible warning message...
I should point out that this is for setuptools. If you use distutils, then the solution is similar, but slightly different:
from distutils.command.install_scripts import install_scripts

class my_install_scripts(install_scripts):  # for distutils
    def run(self):
        install_scripts.run(self)
        self.distribution.script_install_dir = self.install_dir

dist = setup(...,
             cmdclass={'build_ext': my_builder,
                       'install_scripts': my_install_scripts,
                       },
             )
I think the correct solution is:
scripts=glob("myscripts/*.py"),
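In context, that would look something like this (a sketch; myscripts/ is the directory name used in the question):

from glob import glob
from setuptools import setup

setup(
    # ... other arguments ...
    scripts=glob("myscripts/*.py"),
)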
I'm looking for the most elegant way to notify users of my library that they need a specific unix command to ensure that it will work...
When is the best time for my lib to raise an error:
Installation?
When my app calls the command?
At the import of my lib?
Both?
And also, how should you detect that the command is missing (something like if not commands.getoutput("which CommandIDependsOn"): raise Exception("you need CommandIDependsOn"))?
I need advice.
IMO, the best way is to check at install time whether the user has this specific *nix command.
If you're using distutils to distribute your package, in order to install it you have to do:
python setup.py build
python setup.py install
or simply
python setup.py install (in that case python setup.py build is implicit)
To check if the *nix command is installed, you can subclass the build command in your setup.py like this:
from distutils.core import setup
from distutils.command.build import build as _build

class build(_build):
    description = "Custom Build Process"
    user_options = _build.user_options[:]
    # You can also define extra options like this:
    #user_options.extend([('opt=', None, 'Name of optional option')])

    def initialize_options(self):
        # Initialize your extra options here... Not needed in your case:
        #self.opt = None
        _build.initialize_options(self)

    def finalize_options(self):
        # Finalize your options; you can modify their values here:
        #if self.opt is None:
        #    self.opt = "default value"
        _build.finalize_options(self)

    def run(self):
        # Extra check:
        # enter your code here to verify that the *nix command is present
        ...
        # Start the "classic" build command
        _build.run(self)

setup(
    ....
    # Don't forget to register your custom build command
    cmdclass={'build': build},
    ....
)
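For the elided check itself, one possible sketch uses distutils' own find_executable (the command name is the placeholder from the question):

from distutils.spawn import find_executable

def verify_command_present():
    # abort the build early if the required external command is missing
    if find_executable("CommandIDependsOn") is None:
        raise SystemExit("error: this package requires the 'CommandIDependsOn' command")

Calling verify_command_present() at the top of run() above performs the check.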
But what if the user uninstalls the required command after the package installation? To solve this problem, the only "good" solution is to use a packaging system such as deb or rpm and declare a dependency between the command and your package.
Hope this helps.
I wouldn't have any check at all. Document that your library requires this command, and if the user tries to use whatever part of your library needs it, an exception will be raised by whatever runs the command. It should still be possible to import your library and use it, even if only a subset of functionality is offered.
(PS: commands is old and broken and shouldn't be used in new code. subprocess is the hot new stuff.)
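To make that lazy check concrete, here is a minimal sketch using shutil.which (Python 3.3+) in place of the deprecated commands module; the function names are hypothetical:

import shutil
import subprocess

def _require_command(name):
    # check only when the feature is actually used, not at import time
    if shutil.which(name) is None:
        raise RuntimeError("the %r command is required for this feature" % name)

def feature_that_needs_the_command(*args):
    _require_command("CommandIDependsOn")
    return subprocess.check_output(["CommandIDependsOn", *args])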