This is a question about setuptools and creating a Python distribution for a web application.
Context
It is a Django project and we distribute it via a private PyPI, to ourselves, not to external users (yet). Currently we run python setup.py sdist to create the distribution and upload that to our PyPI; I suspect it is just a copy of the source code as a tar.gz. We check out the source to do development. We don't install for dev via pip (perhaps we should?).
Recently the project has started using Node.js, which means that we now need to do a "build" to create new files which are not part of the source code but do need to be deployed. The build process requires a bunch of node packages which are neither a necessary nor a desired part of the final deployment to a web server.
Reading through the packaging documentation, it describes the sdist, bdist_wheel, and develop targets for setup.py, but I'm not clear on how to apply these to our situation.
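For reference, these are the three setup.py targets in question:

    python setup.py sdist        # source archive (e.g. a tar.gz) built from the checkout
    python setup.py bdist_wheel  # built distribution (.whl) ready to be installed with pip
    python setup.py develop      # editable install into the current environment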
Question 1
When we pull from our pypi, what should we be pulling?
the pre-built, deployable version of the code for the web server, without source code? or
code on which we run something else that builds and arranges things as they need to be? If so, what would the appropriate setup.py target be (sdist, bdist_wheel, a new custom target)?
Question 1 (put another way)
When, and how, should we run the build?
To 'build' the app, we run Node.js/JavaScript tools, e.g. npm install, and less x y x, or webpack -p, or Python scripts, which convert the source files into deployable files. We don't want these things on the web server.
Discussion
So, generally speaking, as you can tell, we're first a bit confused about when to convert the source files into distribution files, and second about how to actually implement that: subprocess calls to shell scripts from a subclassed setuptools.command?!?
We'd prefer not to push the finished product to PyPI, because then all the bits and bobs we do to create the build are not captured and automated in a neat setuptools-centric way. So when, where, and how do we incorporate the build process into our setuptools distribution?
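One way to do the second part is a custom command that shells out to the Node.js build before the archive is created. The following is only a sketch, under the assumption that npm install and webpack -p are the build steps; the command and class names are made up, not an established convention:

    # setup.py -- a minimal sketch of hooking the front-end build into setuptools
    import subprocess
    from setuptools import setup, Command
    from setuptools.command.sdist import sdist as _sdist

    class BuildAssets(Command):
        """Run the Node.js build so the generated files exist before packaging."""
        description = "build front-end assets with npm/webpack"
        user_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            # Shell out to the same commands run by hand today;
            # assumes npm and webpack are on PATH.
            subprocess.check_call(["npm", "install"])
            subprocess.check_call(["webpack", "-p"])

    class SdistWithAssets(_sdist):
        """Build the assets before sdist collects its file list."""
        def run(self):
            self.run_command("build_assets")
            _sdist.run(self)

    setup(
        name="myproject",   # placeholder name
        version="1.0.0",
        cmdclass={
            "build_assets": BuildAssets,
            "sdist": SdistWithAssets,
        },
    )

The same wrapping can be applied to bdist_wheel. Note that the generated files still have to be declared (e.g. in MANIFEST.in or package_data) to end up in the archive; the custom command only makes sure they exist first.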
* Note: adding more confusion about what goes where, we're also using tox.
Related
Edit: I am self-taught; I don't know the right terms.
This question is probably a duplicate, but I am not able to find the original. I need to include a Python pip package in my application, say numpy. I don't want the user to run pip install -r requirements.txt; I want to include the module when the user downloads the application.
You don't have to include pip when you build the app, because when you build the app, all dependencies will be included in the app resources. I am not sure which app you are building, but in general, all dependencies will be compiled to .pyc files when the app is built.
It's a whole world to explore, honestly. You need to explore the subjects of deployment and distribution; those will depend on the target operating systems... I would suggest investigating Docker, which allows you to package the OS (some Unix), the runtime, and the dependencies into one "thing".
Vendoring
The only way I can think of besides pip to do this with pure source code would be to vendor the code you need. This means downloading the source code of the package (like numpy) and including it in your project.
This comes with many potential issues including:
licensing issues
issues with installation if there are C extension (CPython) files that need to be compiled
Having to manually update the files when new releases come out
increased download size for your source code
etc.
I would not recommend doing this unless there is a hard technical requirement for it, because it's a real hassle to deal with. If there is a particular reason, beyond having to run the extra command, why you want to avoid pip, stating it would help in better addressing this question.
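For concreteness, and only as a hypothetical illustration, a vendored layout tends to look something like this (and is really only workable for pure-Python dependencies):

    # Hypothetical project layout with a vendored dependency:
    #
    #   myapp/
    #       __init__.py
    #       cli.py
    #       _vendor/
    #           __init__.py
    #           somepackage/    <- copied source tree of the dependency
    #
    # Application code then imports the vendored copy explicitly, e.g. in cli.py:
    from myapp._vendor import somepackage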
Binary distribution
Also, depending on the app, you could look into something like PyInstaller to make a single .exe or binary file out of your app, so users don't even need Python or your dependencies. But be warned: this has its own set of complexities to look out for, like having to build for every target platform (Windows, Mac, and Linux).
I have a Python app with a setup.py that works just fine for installing it through setuptools. I then package it up as DEB and PKGNG using the excellent Effing package management (FPM). I've also made some quick tests with setuptools-pkg, and that seems to work too.
Now I have a need to distribute the packages, including init scripts to start/stop/manage the service. I have my init scripts in the source repo and, according to what seems to be best practice, I'm not doing anything with them in setuptools; I'm handling them in the OS-specific packaging: for Debian-based systems I use the --deb-init, --deb-upstart and --deb-systemd FPM options as needed.
How can I build a FreeBSD package that includes the correct rc.d script, using FPM or through any other means?
All the examples I've seen add the rc.d script when building a package through the ports collection, but this is an internal app and is not going to be published to the Ports collection or on PyPI. I want to be able to check out the repository on a FreeBSD system, run a command that gives me a package, distribute it to other FreeBSD systems, install it using pkg, and have my init script correctly deployed to /usr/local/etc/rc.d/<myappname>. There's no need to keep using FPM for that; anything works as long as it gives me a well-formed package.
I would highly suggest creating your package as if it were any other port, whether it is going to be published or not.
One of the advantages of doing this is that you could also include all your tests and automate the deployment, giving you out of the box the basis for a continuous integration/delivery setup.
Check out poudriere. You could indeed maintain a set of custom ports with your very own settings and distribute them across your environments without any hassle:
pkg install -r your-poudriere yourpkg
In case this is too much or doesn't adapt well to your use case, you can always fall back to Ansible, where you could create a custom rc.d script within a template of an Ansible role.
If you just want to build and deploy something, let's say a microservice, then pkg is probably not the best tool; maybe you just need a supervisor that can work on all your platforms (sysutils/immortal), so that you could just distribute your code and have a single recipe for starting/stopping the service.
nbari's answer is probably the Right Way™ to do this and I'd probably create my own "port" and use that to build the package on a central host.
At the time of my original question I had taken a different approach that I'm reporting here for the sake of completeness.
I am still building the application's package (i.e. myapp-1.0.0.txz) with fpm -s python -t freebsd, which basically uses Python's setuptools infrastructure to get the necessary information, and I don't include any rc.d file in it.
I also build a second package, which I will call myapp-init-1.0.0.txz, with the dir source type (i.e. fpm -s dir -t freebsd), and I include only the init script in that package.
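For completeness, the two invocations look roughly like this (package names, versions, and the rc.d script path are placeholders):

    fpm -s python -t freebsd setup.py
    fpm -s dir -t freebsd -n myapp-init -v 1.0.0 \
        rc.d/myappname=/usr/local/etc/rc.d/myappname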
Both packages get distributed to hosts and installed, thus solving my distribution issue.
I'm sure this has been answered elsewhere, but I must not know the right keywords to find the answer...
I'm working on a site that requires several different components deployed on different servers but relying on some shared functions. I've implemented this by putting the shared functions into a pip module in its own git repo that I can put into the requirements.txt file of each project.
This is pretty standard stuff - more or less detailed here:
https://devcenter.heroku.com/articles/python-pip
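Concretely, the requirements.txt of each project ends up with a line along these lines (the repository URL, tag, and egg name here are placeholders):

    git+https://github.com/yourorg/shared-funcs.git@v1.2.0#egg=shared-funcs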
My question is: now that I have this working to deploy code into production, how do I set up my dev environment in such a way that I can make edits to the code in the shared module without having to do all of the following?
1. Commit changes
2. Increment the version in setup.py in the shared library
3. Increment the version in requirements.txt
4. pip install -r requirements.txt
That's a lot of steps to do all over again if I make one typo.
On a similar note, I used Jenkins with a git hook and a simple bash script (4 or 5 lines, maybe, that would install/upgrade requirements.txt, restart the web server, and little more). When I commit changes, Jenkins runs my bash script, and voilà: almost instant upgrade.
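For illustration, the script was along these lines (the checkout path and service name are placeholders):

    #!/bin/sh
    set -e
    cd /srv/myapp                              # checkout updated by the git hook
    pip install --upgrade -r requirements.txt
    service myapp restart                      # restart the web server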
But note that this is hack-ish. Jenkins is a continuous integration tool focused on building and testing, and there are probably better and simpler tools in this case (hint: Continuous Integration).
I am writing a CLI python application that has dependencies on a few libraries (Paramiko etc.).
If I download their source and just place them under my main application source, I can import them and everything works just fine.
Why would I ever need to run their setup.py installers or deal with python package managers?
I understand that when deploying server-side applications it is OK for an admin to run easy_install/pip commands etc. to install the prerequisites, but for script-like CLI apps that have to be distributed as self-contained apps that depend only on a Python binary, what is the recommended approach?
Several reasons:
Not all packages are pure-Python packages. It's easy to include C extensions in your package and have setup.py automate the compilation process.
Automated dependency management; dependencies are declared and installed for you by the installer tools (pip, easy_install, zc.buildout). Dependencies can be declared dynamically too (try to import json, if that fails, declare a dependency on simplejson, etc.).
Custom resource installation setups. The installation process is highly configurable and dynamic. The same goes for dependency detection; the cx_Oracle package has to jump through quite a few hoops to make installation straightforward across all the various platforms and quirks of the Oracle library distribution options it needs to support, for example.
Why would you still want to do this for CLI scripts? That depends on how crucial the CLI is to you; will you be maintaining this over the coming years? Then I'd still use a setup.py, because it documents what the dependencies are, including minimal version needs. You can add tests (python setup.py test), and deploy to new locations or upgrade dependencies with ease.
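A minimal sketch of what that buys you, for a hypothetical CLI project (names and version pins are illustrative); the conditional simplejson dependency mirrors the dynamic declaration mentioned above:

    # setup.py -- names and version pins are illustrative
    from setuptools import setup, find_packages

    install_requires = ["paramiko>=2.0"]   # documents the minimal version needed
    try:
        import json                        # stdlib since Python 2.6
    except ImportError:
        install_requires.append("simplejson")

    setup(
        name="mycli",
        version="0.1.0",
        packages=find_packages(),
        install_requires=install_requires,
    )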
In this case, the application consists of one or more Python files, plus a settings.ini file. When being installed, the Python files need to go into ~/.hg (by default), or the user should be prompted for where they want them installed. The installation also requires text to be appended to files like hgrc.
Is there already a specific Python package that does all of this? If anyone has any experience in this area, please share.
As far as I have looked, Python packaging refers to setuptools and easy_install.
The basis for packaging is a setup.py file. A problem with this is that such a setup file is used for a couple of dissimilar tasks:
Generating documentation.
Creating a release (source/binary).
Actually installing the software.
Combining these tasks in one file is a bit of a hazard and leads to problems now and then.
or distutils, but I am not sure whether these packages support the notion of prompting the user, or deployment steps like appending text to existing files and creating new ones.
I would include a custom script (a bin/ command) which will poke the user's .hgrc and other files. Doing it without the user's consent would be rude.
User story
Install the package: easy_install pkgname; this deploys myproject-init-hg (a UNIX executable, which can also be written in Python).
The installation finishes and tells the user to run the command myproject-init-hg.
setup.py includes a mechanism to distribute and deploy bin/-style scripts.
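A sketch of that user story with hypothetical names; the console_scripts entry point is the setuptools mechanism for shipping such bin/ commands:

    # setup.py -- declares the post-install helper as a console script
    from setuptools import setup, find_packages

    setup(
        name="myproject",
        version="1.0.0",
        packages=find_packages(),
        entry_points={
            "console_scripts": [
                "myproject-init-hg = myproject.init_hg:main",
            ],
        },
    )

    # myproject/init_hg.py -- run explicitly by the user after installation,
    # so nothing touches ~/.hgrc without their consent.
    import os

    def main():
        hgrc = os.path.expanduser("~/.hgrc")
        answer = input("Append myproject settings to %s? [y/N] " % hgrc)
        if answer.strip().lower() != "y":
            print("Nothing changed.")
            return
        with open(hgrc, "a") as fh:
            fh.write("\n[extensions]\nmyproject =\n")   # illustrative content
        print("Updated %s" % hgrc)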