Reusable Django apps + Ansible provisioning

I'm a long-time Django developer and have just started using Ansible, after using Vagrant for the last 18 months. Historically I've created a single VM for developing all my projects and symlinked the reusable Django apps (Python packages) I create into the site-packages directory.
I've got a working dev box for my latest Django project, but I can't really make changes to my own reusable apps without having to copy those changes back to a Git repo. Here's my ideal scenario:
I check out all the packages I need to develop as Git submodules within the site I'm working on
I have some way (symlinking or a better method) to tell Ansible to set up the box and install my packages from these Git submodules
I run vagrant up or vagrant provision
It reads requirements.txt and installs the remaining packages (things like South, Pillow, etc.), but it skips my set of tools because it knows they're already installed
I hope that makes sense. Basically, imagine I'm developing Django itself. How do I tell Vagrant (via Ansible, I assume) to find my local copy of Django rather than the one from PyPI?
Currently the only way I can think of doing this is creating individual symlinks for each of those packages I'm developing, but I'm sure there's a more sensible model.
Thanks!

You should probably think of it slightly differently. You create a Vagrantfile that specifies Ansible as a provisioner, and in that Vagrantfile you also specify which playbook to use for the vagrant provision step.
If your playbooks are written in an idempotent way, running them multiple times will skip steps that already match the desired state.
You should also think about what the desired end state of a VM should look like and write playbooks to accomplish that. Unless I'm misunderstanding something, all your playbook actions should be happening inside the VM, not directly on your local machine.
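To make the "install from my local copies, not PyPI" part concrete, one common pattern is to have the playbook install the submodule checkouts in editable mode before processing requirements.txt. A minimal sketch of the commands the playbook would run inside the VM (the paths and package names below are hypothetical, assuming the site is available via Vagrant's synced folder at /vagrant):
$ pip install -e /vagrant/src/my-reusable-app   # editable install from the submodule checkout
$ pip install -e /vagrant/src/my-other-app      # imports now resolve to the working copies
$ pip install -r /vagrant/requirements.txt      # remaining packages; anything already satisfied by the editable installs is skipped
Because the editable installs point straight at the synced folder, changes made on the host are picked up immediately without copying anything back to a repo first, as long as the versions still satisfy the pins in requirements.txt.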

Related

How to set up a local dev environment for Django

Python web development newbie question here. I'm coming from PHP/Laravel, and there you have Homestead, which is a pre-configured Vagrant box for local development. In a so-called Homestead file, you configure everything such as the webserver, database, or PHP version. Are there any similar pre-configured dev environments for Django?
I already googled, and there don't seem to be any official or widely used Vagrant boxes for Django. The official Django tutorial even tells you how to install and set up Apache and your preferred database. This is a lot of work every time you want to create a new Django project, especially if those projects run in different production environments. All the other tutorials I've found just explain how to set up virtual environments with venv or the like. But that doesn't seem to be sufficient to me. What you obviously want is a dev environment that is as close as possible to your production environment, so you need some kind of virtual machine.
I'm a little bit confused right now. Do you just grab some plain Ubuntu (or any other OS) Vagrant box and install everything yourself? Don't you use Vagrant at all but something else? Did I miss something and the Python web development workflow is completely different?
The typical local development in Django just uses the builtin web server and an SQLite database. The steps to get that up and running are:
Ensure you have the desired version of Python installed.
Create a virtual env to isolate libraries needed for your project from the rest of the system (this is optional but highly recommended; I'd actually recommend using Poetry).
Install Django, probably via pip.
Run manage.py runserver (and migrate the database and set up a superuser, yada yada).
That's pretty much it and sufficient for local development. What you need to be aware of is that some differences exist between SQLite and Postgres, MySQL, etc., and if you hit a spot where the difference matters, you'll want to set up your target database as well and develop directly against it. That can probably happen in a Docker container if that makes sense for you. But there's little reason to put Django itself into a container during development, unless your project is especially complex and requires simulating certain conditions which the built-in server somehow can't.
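For example, if production runs Postgres, a throwaway container is usually enough for local development (the container name and password here are just placeholders):
$ docker run --name myproject-db -e POSTGRES_PASSWORD=devpass -p 5432:5432 -d postgres   # disposable Postgres for dev
$ # point DATABASES in settings.py at localhost:5432, then apply migrations:
$ python manage.py migrate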
Does this help?
$ python3 -m venv my_env # create your virtual environment
$ source my_env/bin/activate # Any package you install will be inside this environment
$ pip install -r requirements.txt # can also install packages individually
$ deactivate # get out of the isolated environment
Here's the doc

Trying to make a development instance for a Python pyramid project

So I have this Python pyramid-based application, and my development workflow has basically just been to upload changed files directly to the production area.
Coming close to launch, and obviously that's not going to work anymore.
I managed to edit the connection strings in development.ini and point the development instance at a secondary database.
Now I just have to figure out how to create another copy of the project somewhere where I can work on things and then make the changes live.
At first, I thought that I could just make a copy of the project directory somewhere else and run it with different arguments pointing to the new location. That didn't work.
Then, I basically set up an entirely new project called myproject-dev. I went through the setup instructions:
I used pcreate, and then setup.py develop, and then I copied over my development.ini from my project and carefully edited the various references to myproject-dev instead of myproject.
Then,
initialize_myproject-dev_db /var/www/projects/myproject/development.ini
Finally, I got a nice Pyramid welcome page showing that everything was working correctly.
I thought at that point I could just blow out everything in the project directory and copy over the main project files, but then I got that feeling in the pit of my stomach when I noticed that a lot of things weren't working, like static URLs.
Apparently, I'm referencing myproject in includes and also static URLs, and who knows where else.
I don't think this idea is going to work, so I've given up for now.
Can anyone give me an idea of how people go about setting up a development instance for a Python pyramid project?
The first thing you should do, if you haven't already, is put your project under version control. I'd recommend using git.
In addition to the benefits of managing the changes made to the application as you develop, it will also make it easier to share copies between developers... or with the production deployment. Indeed, production can just be a git clone of the project, just like your development instance.
The second thing is you need to install the project in your Python library path. This is how all the imports and includes are going to work.
I'd recommend creating a virtual environment for this, with either virtualenv or pew, so that your app (and its dependencies) are "isolated" from the rest of your system and other apps.
You probably have a setup.py script in your project. If not, create one. Then install your project with pip install . in production, or pip install -e . in development.
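Concretely, the setup described above might look something like this for a development checkout (directory and file names are illustrative):
$ cd myproject-dev
$ python3 -m venv env                 # per-checkout virtual environment
$ source env/bin/activate
$ pip install -e .                    # editable install, so code edits are picked up immediately
$ pserve development.ini --reload     # run the dev instance against its own .ini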
Here's how I managed my last Pyramid app:
I had both a development.ini and a production.ini. I actually had a development.local.ini in addition to the other two - one for local development, one for our "test" system, and one for production. I used git for version control, and had a main branch for production deployments. On my prod server I created the virtual environment, etc., then would pull my main branch and run using the production.ini config file. Updates basically involved jumping back into the virtualenv and pulling latest updates from the repo, then restarting the pyramid server.
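In command form, that update cycle was roughly the following (paths and the branch name are placeholders):
$ cd /srv/myapp
$ source env/bin/activate             # the prod virtualenv created earlier
$ git pull origin main                # or whatever the production branch is called
$ pip install -e .                    # pick up any dependency or entry-point changes
$ # restart the pserve process (however it is supervised) against production.ini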

Deploying a Python site to a production server

I have a Django website on a testing server and I am confused about how the deployment procedure should go.
Locally I have these folders:
code
virtualenv
static
static/app/bower_components
node_modules
Currently, only the code folder is in Git.
My initial thought was to do this on the production server:
git clone repo
pip install
npm install
bower install
collectstatic
But I had this problem that sometimes some components in pip, npm, or bower fail to install and then the production deployment fails.
I was thinking of putting everything (static, bower, npm, etc.) inside Git so that I can fetch it all in production.
Is that the right way to do it? I want to know the right way to tackle that problem.
But I had this problem that sometimes some components in pip, npm, or bower fail to install and then the production deployment fails.
There is no solution for this other than to find out why things are failing in production (or a way around would be to not install anything in production, just copy stuff over).
I would caution against the second option because Python virtual environments are not designed to be portable. If you have components such as PIL/Pillow or database drivers, these need system libraries to be installed and compiled against at build time.
Here is what I would recommend, which is in-line with the deployment section in the documentation:
Create an updated requirements file (pip freeze > requirements.txt)
Run collectstatic on your testing environment.
Move the static directory to your frontend/proxy machine, and map it to STATIC_URL. Confirm this works by browsing the static URL (for example: http://example.com/static/images/logo.png)
Clone/copy your codebase to the production server.
Create a blank virtual environment.
Install dependencies with pip install -r requirements.txt
Make sure you run through the deployment checklist, which includes security tips and settings you need to enable for production.
After this point, you can bring up your django server using your favorite method.
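As a rough command-line version of the steps above (paths, the repo URL, and directory names are placeholders):
# On the testing machine
$ pip freeze > requirements.txt
$ python manage.py collectstatic --noinput     # gather static files for the frontend/proxy box
# On the production server
$ git clone https://example.com/you/yourproject.git /srv/yourproject
$ python3 -m venv /srv/yourproject/env
$ /srv/yourproject/env/bin/pip install -r /srv/yourproject/requirements.txt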
There are many, many guides on deploying django and many are customized for particular environments (for example, AWS automation, Heroku deployment tips, Digital Ocean, etc.) You can browse those for ideas (I usually pick out any automation tips) but be careful adopting one strategy without making sure it works with your particular environment/requirements.
In addition this might be helpful for some guidelines on deployment.

Using git post-receive hook to deploy python application in virtualenv

My goal is to be able to deploy a Django application to one of two environments (DEV or PROD) based on the Git branch that was committed and pushed to a repository. This repository is hosted on the same server as the Django applications are being run on.
Right now, I have two virtualenvs set up. One for each environment. They are identical. I envision them only changing if the requirements.txt is modified in my repository.
I've seen tutorials around the internet that offer deployments via git by hosting the repository directly in the location where the application will be deployed. This doesn't work for my architecture. I'm using RhodeCode to host/manage the repository. I'd like to be able to use a post-receive (or other if it's more appropriate) hook to trigger the update to the appropriate environment.
Something similar to this answer will allow me to narrow down which environment I want to focus on.
When I put a source activate command in an external script (i.e. my hook), the script stops at that command. The virtualenv is started appropriately, but any further actions in the script (e.g. pip install -r requirements.txt or ./manage.py migrate) aren't executed.
My question is: how can I have that hook run the associated virtualenv? Or, if it is already running, update it appropriately with the new requirements.txt, South migrations, and application code?
Is this workflow overly complicated? Theoretically, it should be as simple as a git push to the appropriate branch.
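For what it's worth, a post-receive hook along these lines sidesteps the source activate problem by calling the virtualenv's binaries directly; every path and branch name below is hypothetical and would need adapting to the RhodeCode setup:
#!/bin/sh
# post-receive: git passes "<oldrev> <newrev> <refname>" lines on stdin
while read oldrev newrev refname; do
    branch=$(basename "$refname")
    case "$branch" in
        master) ENV=/srv/envs/prod; APP=/srv/apps/prod ;;
        dev)    ENV=/srv/envs/dev;  APP=/srv/apps/dev  ;;
        *)      continue ;;
    esac
    # update the deployed working copy for this environment
    GIT_WORK_TREE="$APP" git checkout -f "$branch"
    # no "source activate" needed: the env's own pip/python are enough
    "$ENV/bin/pip" install -r "$APP/requirements.txt"
    "$ENV/bin/python" "$APP/manage.py" migrate
done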

Caching Python requirements for production deployments

I'm building various Python-based projects that use pip/buildout to install dependencies. But I don't like the idea of someone deleting a GitHub project and crippling my apps, or a network outage meaning I can't perform a deployment.
How do other people solve this?
I've got various ideas, but I think perhaps the one that sounds most promising would be some kind of caching proxy server. I'd point pip to use this internal proxy server which would cache a copy of the downloaded project, and periodically check for updates (if there's a net connection) before serving cached versions.
Does anything like this already exist?
Use case:
I have a project which I deploy to web server 1. I add new features with a remote dependency, and when I come to update the production web server, PyPI is down so I can't deploy. Or perhaps when I come to set up a new web server, a dependency has disappeared from GitHub or wherever.
How can I make it so my deployments/dev environments can always be brought up regardless of what happens in the wider world?
Also, when I deploy, I won't deploy over the top of existing code. Rather, I'll build a new virtualenv and switch over to it so I can roll back if anything goes wrong. So each time I deploy I'll need to rebuild my environment and will need the dependencies to exist.
So I'm looking for a solution that will insulate me against short-term network outages to servers hosting dependencies, as well as guarding against projects being deleted.
You should keep a "reference copy" of the projects on which you depend.
If someone removes the project from GitHub (and PyPI and all the mirrors, and every other site on the net), then you still have the source and can distribute it yourself.
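With a reasonably recent pip, one low-tech way to keep that reference copy is to vendor the distributions next to the project and install from the local directory (the vendor/ name is arbitrary):
$ pip download -r requirements.txt -d ./vendor/                        # snapshot sdists/wheels while the network is up
$ pip install --no-index --find-links=./vendor/ -r requirements.txt    # later installs need no PyPI or GitHub access
Commit vendor/ (or rsync it to your servers) and new virtualenvs can always be rebuilt offline.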
I have exactly the same requirements, and also use buildout to manage my deployments. I try not to install ANY of my package dependencies system-wide; I let buildout install eggs for all of them into my buildout. That way if I depend on a newer version of some package in rev N+1 of my project, and at "go-live" time N+1 falls on its face, I can roll back to N and automatically get the package dependencies that N worked with.
We run a private eggbasket server, and configure buildout to fetch packages only from that. Server contents were initialized by allowing buildout to grab eggs from the network one time, then copying the downloaded eggs.
This way, upgrades to each package are totally under control and I can ensure that 2 successive buildouts of the same snapshot of my code will build out the same thing. When I want to upgrade all, I will let buildout fetch most-recent-versions again, test test test, then copy my eggs to the eggbasket server to go into production mode.
This is what I'm looking for:
http://pypi.python.org/pypi/collective.eggproxy
