Organizing multiple deployable apps with shared code in a single repo - python

I have a repo with multiple sub-directories. Each sub-directory contains an app that's deployable to Heroku. To push the code to Heroku, I use git subtree, as suggested here.
This has worked well, and I want to continue in this direction by adding versioned APIs as deployable apps. The problem is that these APIs share substantial amounts of code.
Is there a way to have shared code in this setup, such that the apps are still deployable and the shared code is contained in the repo? The APIs are written in python, so something pip/virtualenv-specific would work.
Git submodules would allegedly work too, but I'd prefer to avoid them after a bad past experience.

Never got an answer, so here's what I did.
I continued to use subtree instead of submodules. Then, for each sub-project, I created two requirements.txt files for pip: one for Heroku and one for running the sub-project locally. They are identical except for how the shared lib is specified.
In the Heroku-based requirements.txt, the shared lib is specified like this:
-e git+https://username:password@github.com/org/repo.git#egg=sharedlib&subdirectory=sharedlib
This tells pip to install the sharedlib subdirectory of repo from GitHub.
In the requirements.txt for running the project locally, the shared lib is specified like this:
-e ../sharedlib
This tells pip to install the local sharedlib subdirectory of the repo.
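For either form to work, sharedlib itself needs to be pip-installable, i.e. the sharedlib sub-directory needs its own setup.py. A minimal sketch (name and version are placeholders, not taken from the original setup):
# sharedlib/setup.py -- illustrative only
from setuptools import setup, find_packages

setup(
    name="sharedlib",   # must match the #egg= name used in the Heroku requirements.txt
    version="0.1.0",
    packages=find_packages(),
)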
Hope this helps!

Related

Trying to make a development instance for a Python pyramid project

So I have this Pyramid-based Python application, and my development workflow has basically just been to upload changed files directly to the production area.
Coming close to launch, and obviously that's not going to work anymore.
I managed to edit the connection strings in development.ini and point the development instance to a secondary database.
Now I just have to figure out how to create another copy of the project somewhere where I can work on things and then make the changes live.
At first, I thought that I could just make a copy of the project directory somewhere else and run it with different arguments pointing to the new location. That didn't work.
Then, I basically set up an entirely new project called myproject-dev. I went through the setup instructions:
I used pcreate, and then setup.py develop, and then I copied over my development.ini from my project and carefully edited the various references to myproject-dev instead of myproject.
Then,
initialize_myproject-dev_db /var/www/projects/myproject/development.ini
Finally, I get a nice Pyramid welcome page telling me that everything is working correctly.
I thought at that point I could just blow out everything in the project directory and copy over the main project files, but then I got that feeling in the pit of my stomach when I noticed that a lot of things weren't working, like static URLs.
Apparently, I'm referencing myproject in includes and also static URLs, and who knows where else.
I don't think this idea is going to work, so I've given up for now.
Can anyone give me an idea of how people go about setting up a development instance for a Python pyramid project?
The first thing you should do, if it's not already the case, is put your project under version control. I'd recommend using git.
In addition to the benefits of managing the changes made to the application during development, it will also make it easier to share copies between developers... or with the production deployment. Indeed, production can just be a git clone of the project, just like your development instance.
The second thing is you need to install the project in your Python library path. This is how all the imports and includes are going to work.
I'd recommend creating a virtual environment for this, with either virtualenv or pew, so that your app (and its dependencies) are "isolated" from the rest of your system and other apps.
You probably have a setup.py script in your project. If not, create one. Then install your project with pip install . in production, or pip install -e . in development.
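For a Pyramid project, a pcreate scaffold already generates a setup.py roughly along these lines; if you have to write one yourself, a minimal sketch (project name and dependencies are placeholders):
# setup.py -- minimal sketch for a Pyramid project
from setuptools import setup, find_packages

setup(
    name="myproject",
    version="0.1",
    packages=find_packages(),
    install_requires=["pyramid", "waitress"],
    entry_points={
        # lets pserve find the WSGI app factory referenced by the .ini files
        "paste.app_factory": ["main = myproject:main"],
    },
)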
Here's how I managed my last Pyramid app:
I had both a development.ini and a production.ini. I actually had a development.local.ini in addition to the other two - one for local development, one for our "test" system, and one for production. I used git for version control, and had a main branch for production deployments. On my prod server I created the virtual environment, etc., then would pull my main branch and run using the production.ini config file. Updates basically involved jumping back into the virtualenv and pulling latest updates from the repo, then restarting the pyramid server.

Python Development in multiple repositories

We are trying to find the best way to approach that problem.
Say I work in a Python environment, with pip & setuptools.
I work in a normal git flow, or so I hope.
So:
Move to feature branch in some app, make changes.
Move to feature branch in a dependent lib - Develop thing.
Point the app, using an "-e git+ssh" requirement (see the example below), to the feature branch of the dependent lib.
Create a Pull Request.
When this is all done, I want to merge everything to master, but I can't without making yet another final change so that the app's requirements.txt (step 3 above) points back to the dependent lib's main branch.
Is there any good workflow for "micro services" or multiple dependent source codes in python that we are missing?
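For reference, the requirement line in step 3 typically looks something like this (repository and branch names are illustrative):
-e git+ssh://git@github.com/org/dependent-lib.git@feature/my-change#egg=dependent-lib
and it is exactly this line that has to be edited back to the main branch (or a released version) before the merge.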
Python application workflow from development to deployment
It looks like you are searching for a workflow for developing a Python application using git.
The following description applies to any kind of Python-based application, not only Pyramid-based web ones.
Requirements
Situation:
developing a Python-based solution using the Pyramid web framework
there are multiple Python packages participating in the final solution; packages might depend on each other
some packages come from the public PyPI, others might be private ones
source code controlled by git
Expectation:
the proposed working style shall:
allow pull requests
work in situations where packages depend on each other
make sure deployments are repeatable
Proposed solution
Concepts:
even the Pyramid application is released as a versioned package
for a private PyPI, use devpi-server, including volatile and release indexes
for package creation, use pbr
use tox for package unit testing
test before you release a new package version
test before you deploy
keep deployment configuration separate from the application package
Pyramid web app as a package
Pyramid allows creating applications in the form of a Python package. In fact, the whole introductory tutorial (21 stages) uses exactly this approach.
Even though you can run the application in development mode, you do not have to do so in production. Running from a released package is easy.
Pyramid uses nice .ini configuration files. Keep development.ini in the package repository, as it is an integral part of development.
On the other hand, make sure production .ini files are not present: they do not belong with the application code, they belong with the deployment configuration.
To make deployment easier, add a command to your package which prints a typical deployment configuration to stdout. Name the script e.g. myapp_gen_ini.
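A sketch of what such a script could look like (the template content and module path are only illustrative):
# myapp/scripts/gen_ini.py -- illustrative only
import sys

TEMPLATE = """\
[app:main]
use = egg:myapp

[server:main]
use = egg:waitress#main
listen = 0.0.0.0:6543
"""

def main():
    # print a starting-point production configuration to stdout
    sys.stdout.write(TEMPLATE)

# exposed as a console_scripts entry point, e.g.:
#   myapp_gen_ini = myapp.scripts.gen_ini:main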
Write unittests and configure tox.ini to run them.
Keep deployment stuff separate from application
Mixing application code with deployment configuration becomes a problem the moment you have to install a second instance (as you are likely to change at least one line of the configuration).
In the deployment repository:
keep requirements.txt here, listing the application package and the other packages needed for production. Be sure to pin exact package versions, at least for your application package.
keep the production.ini file here. If you have more deployments, use one branch per deployment.
put tox.ini here
tox.ini shall have the following content:
[tox]
envlist = py27
# use py34 or others, if you prefer
[testenv]
commands =
deps =
    -rrequirements.txt
Expected use of the deployment repository is:
clone it to the server
run tox; this will create the virtualenv .tox/py27
activate the virtualenv with $ source .tox/py27/bin/activate
if production.ini does not exist in the repo yet, run $ myapp_gen_ini > production.ini to generate a template for the production configuration
edit production.ini as needed.
test that it works.
commit the production.ini changes to the repository
do other stuff needed to deploy the app (configure web server, supervisord etc.)
For setup.py, use the pbr package
To make package creation simpler, and to keep package versioning tied to git repository tags, use pbr. You will end up with setup.py being only 3 lines long, with all relevant metadata specified in setup.cfg in the form of an ini file.
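With pbr, the whole setup.py is typically just this boilerplate:
# setup.py
import setuptools

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True,
)
and the metadata moves into setup.cfg, roughly like this (values are placeholders):
[metadata]
name = myapp
summary = My Pyramid application
author = Your Name

[files]
packages =
    myapp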
Before you build for the first time, you have to have some files committed to the git repository, otherwise pbr will complain. As you use git, this shall be no problem.
To assign a new package version, run $ git tag -a 0.2.0 and build. This will create the package with version 0.2.0.
As a bonus, it will create AUTHORS and ChangeLog based on your commit
messages. Keep these files in .gitignore and use them to create AUTHORS.rst
and ChangeLog.rst manually (based on autogenerated content).
When you push your commits to another git repository, do not forget to push the tags too.
Use devpi-server as private pypi
devpi-server is an excellent private PyPI, which brings the following advantages:
having a private PyPI at all
cached public PyPI packages
faster builds of virtual environments (as it will install from cached packages)
being able to use pip even without internet connectivity
pushing between various types of package indexes: one for development (published versions can change here), one for deployment (released versions will not change here)
simple unit-test runs for anyone with access to it; it will even collect the results and make them visible via a web page
In the described workflow it serves as the repository of Python packages which can be deployed.
Commands to use:
$ devpi upload to upload the developed package to the server
$ devpi test <package_name> to download, install, run unit tests, publish the test results to devpi-server and clean up the temporary installation
$ devpi push ... to push a released package to the proper index on devpi-server, or even to the public PyPI.
Note that at any time it is easy to configure pip to consume packages from a selected index on the devpi server for $ pip install <package>.
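For example, pointing pip at a particular devpi index can be as simple as (host and index names are placeholders):
$ pip install -i http://devpi.example.com/user/prod/+simple/ mypackage
or the same index URL can be set once via the PIP_INDEX_URL environment variable or pip.conf.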
devpi-server is also ready for use in continuous integration testing.
How git fits into this workflow
The described workflow is not bound to a particular style of using git.
On the other hand, git plays its role in the following situations:
commit: the commit message will become part of the autogenerated ChangeLog
tag: defines versions (recognized by the pbr-based setup.py)
As git is distributed, with multiple repositories, branches etc., devpi-server allows a similar distribution: each user can have their own working index to publish to. In the end there will still be one git repository with a master branch to use, and in devpi-server there will likewise be one agreed production index.
Summary
The described process is not simple, but its complexity is proportional to the complexity of the task.
It is based on tools:
tox
devpi-server
pbr (Python package)
git
The proposed solution allows:
managing Python packages, including release management
unit testing and continuous integration testing
any style of using git
deployment and development with clearly defined scopes and interactions.
Your question assumes multiple repositories. The proposed solution decouples them by means of well-managed package versions published to devpi-server.
We ended up using git dependencies and not devpi.
I think that when git is used, there is no need to add another package repository, as long as pip can install from it.
The core issue, where the branch code (because of a second-level dependency) differs from what gets merged to master, is not solved yet; instead we work around it by trying to remove that second-level dependency.

build/deploy script in product repo or in external repo?

I am working on a web project written in python. Here are some facts:
Project hosted on GitHub
a fabric script (fabfile.py) for automated build and deploy (fabric is a Python library for build and deploy automation)
Jenkins for build automation
My question is: where should fabfile.py conventionally live?
I prefer to put fabfile.py in the root of the project repo so that I can configure the Jenkins job to grab the source code from git and simply run fab build to get the compiled package.
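For context, the fabfile in question is nothing exotic; a minimal sketch of the build task (Fabric 1.x style, task contents are illustrative):
# fabfile.py, kept in the repo root
from fabric.api import local, task

@task
def build():
    # run the test suite, then build a distributable package
    local("python setup.py test")
    local("python setup.py sdist")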
Someone insists that fabfile.py should NOT be part of the project repo and should be kept in an external repo instead. In that case you need to configure Jenkins to clone the fabfile repo, invoke git to clone the product repo, and then run the packaging.
I know this is probably a matter of personal taste, but are there any benefits to putting fabfile.py in a separate repo rather than keeping it with the product code?
Any suggestions are appreciated.
In my opinion, I can't see any benefits besides maybe preventing some junior dev from accidentally deploying unwanted code.
On the other hand, it's nice to have everything in one repo so you don't have to maintain multiple repositories. In past experiences, we always included deployment scripts in the root of the project.

Reusable Django apps + Ansible provisioning

I'm a long-time Django developer and have just started using Ansible, after using Vagrant for the last 18 months. Historically I've created a single VM for development of all my projects, and symlinked the reusable Django apps (Python packages) I create, to the site-packages directory.
I've got a working dev box for my latest Django project, but I can't really make changes to my own reusable apps without having to copy those changes back to a Git repo. Here's my ideal scenario:
I checkout all the packages I need to develop as Git submodules within the site I'm working on
I have some way (symlinking or a better method) to tell Ansible to setup the box and install my packages from these Git submodules
I run vagrant up or vagrant provision
It reads requirements.txt and installs the remaining packages (things like South, Pillow, etc), but it skips my set of tools because it knows they're already installed
I hope that makes sense. Basically, imagine I'm developing Django itself. How do I tell Vagrant (via Ansible, I assume) to find my local copy of Django, rather than the one from PyPI?
Currently the only way I can think of doing this is creating individual symlinks for each of those packages I'm developing, but I'm sure there's a more sensible model.
Thanks!
You should probably think of it slightly differently. You create a Vagrantfile which specifies Ansible as the provisioner. In that Vagrantfile you also specify which playbook to use for the vagrant provision step.
If your playbooks are written in an idempotent way, running them multiple times will skip steps that already match the desired state.
You should also think about what the desired end state of the VM should look like and write playbooks to accomplish that. Unless I'm misunderstanding something, all your playbook actions should be happening inside the VM, not directly on your local machine.

Git, Django and pluggable applications

I'm currently developing several websites with Django, which require several Django apps. Let's say I have two Django projects: web1 and web2 (the websites; each has its own git repo). Both web1 and web2 have a different list of installed apps, but happen to both use one (or more) application(s) developed by me, say "MyApp" (which also has its own git repo).
What is the best way to decouple MyApp from any particular website? What I want is to develop MyApp independently, but have each website use the latest version of the app (if it has it installed, of course). I have two "proposed" solutions: use symlinks on each website to a "master" MyApp folder, or use git to push from MyApp to each website's repo.
How to deploy with this setup? Right now, I can push the git repo of web1 and web2 to a remote repo in my shared hosting account, and it works like a charm. Will this scale adequately?
I think I have the general idea working in my head, but I'm not sure about the specifics. Won't this create a nested git repo issue? How does git deal with symlinks, specifically if the symlink destination has a .git folder in it?
The way I work:
Each website has its own git repo and each app has its own repo. Each website also has its own virtualenv and requirements.txt. Even though 2 websites may share the most recent version of MyApp right now, they may not in the future (maybe you haven't gotten one of the websites up to date on some API changes).
If you really must have just one version of MyApp, you could install it at the system level and then symlink it into the virtualenv for each project.
For development on a local machine (not production) I do it a little differently. I symlink the app's project folder into a "src" folder inside the website's virtualenv and then run python setup.py develop into that virtualenv, so that the newest changes are always used on the website "in real time".
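Roughly, that local setup looks like this (paths are placeholders):
$ ln -s ~/code/myapp ~/envs/web1/src/myapp
$ source ~/envs/web1/bin/activate
(web1)$ cd ~/envs/web1/src/myapp
(web1)$ python setup.py develop
After that, edits made in ~/code/myapp are picked up by web1 immediately.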
