Correct way to update a live Django web application - python

Before the actual problem, let me explain our architecture. We are using git over SSH on our servers and have post-receive hooks enabled to set up code. The code is all maintained in a separate folder. What we need is that whenever a person pushes code to the server, it runs tests and migrations and updates the live site. Currently, whenever the application's models are updated, it crashes.
What we need is a way for the hook script to detect whether the code is proper (by proper I mean no syntax errors, etc.), then run migrations and update the current application with the new code without downtime. We are using nginx to proxy to the Django application, virtualenv for installing packages from a requirements.txt file, and gunicorn for deployment.
The bottom line is that if there is a failure at any point, the pushed commit should be rejected, and if all tests are successful, it should run migrations on the database and start the new app.
One thought I had was to use two ports for this: one running the main application and another running the pushed code. If the pushed code passes its tests, change the port in nginx to the new application and reload nginx. Please discuss any drawbacks of this approach, and show a sample hook script demonstrating how to reject a git push in case of failure.
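One detail worth noting: by the time a post-receive hook runs, the pushed refs have already been accepted, so rejection has to happen in a pre-receive (or update) hook. Below is a minimal sketch of such a hook (hooks/pre-receive in the bare repository, made executable); the test command and paths are placeholder assumptions, not your actual setup:

#!/bin/sh
# pre-receive: reject the push when the incoming code fails the test suite
while read oldrev newrev refname; do
    # skip branch deletions (newrev is all zeros)
    [ "$newrev" = "0000000000000000000000000000000000000000" ] && continue
    TMP=$(mktemp -d)
    # export the pushed revision into a scratch directory
    git archive "$newrev" | tar -x -C "$TMP"
    # run the tests there; a non-zero exit rejects the entire push
    if ! (cd "$TMP" && python manage.py test); then
        echo "Tests failed; push rejected." >&2
        rm -rf "$TMP"
        exit 1
    fi
    rm -rf "$TMP"
done
exit 0

With this in place, a failing test suite makes the push abort on the pusher's side with a "pre-receive hook declined" error.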

Consider using Fabric. Fabric will allow you to create Pythonic scripts, and you can run deployments on a remote server, creating a new database and checking whether the migrations run safely. Once all is good, your Fabric script can deploy to prod, or, if something fails, send an email.
This makes your life simpler.
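As a hedged illustration of that flow (shown as the raw commands a Fabric task would automate, not actual Fabric code), with the hostname, paths, and service name as placeholders:

$ ssh deploy@example.com sh -s <<'EOF'
set -e                                   # stop at the first failing step
cd /srv/myapp
git pull origin master
venv/bin/pip install -r requirements.txt
venv/bin/python manage.py test           # if tests fail, nothing below runs
venv/bin/python manage.py migrate
sudo systemctl restart gunicorn          # pick up the new code
EOF

A Fabric task wraps steps like these in Python functions, so a single fab deploy from your workstation runs the whole sequence and stops at the first failure.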

Related

Docker vs old approach (supervisor, git, your project)

I've been on Docker for the past few weeks and I can say I love it and I get the idea. But what I can't figure out is how I can "transfer" my current setup to a Docker solution. I guess I'm not the only one, so here is what I mean.
I'm a Python guy, more specifically Django. So I usually have this:
Debian installation
My app on the server (from git repo).
Virtualenv with all the app dependencies
Supervisor that handles Gunicorn that runs my Django app.
The thing is, when I want to upgrade and/or restart the app (I use fabric for these tasks), I connect to the server, navigate to the app folder, run git pull, and restart the supervisor task that handles Gunicorn, which reloads my app. Boom, done.
But what is the right (better, more Docker-ish) approach to modify this setup when I use Docker? Should I somehow connect to the Docker image's bash every time I want to upgrade the app and run the upgrade, or (from what I saw) should I expose the app into a folder outside the Docker image and run the standard upgrade process?
Hope you get the confusion of an old-school dude. I bet the Docker guys have thought about that.
Cheers!
For development, docker users will typically mount a folder from their build directory into the container at the same location the Dockerfile would otherwise COPY it. This allows for rapid development where at most you need to bounce the container rather than rebuild the image.
For production, you want to include everything in the image and not change it: only persistent data goes in volumes; your code lives in the image. When you make a change to the code, you build a new image and replace the running container in production.
Logging into the container and manually updating things is something I only do to test while developing the Dockerfile, not to manage a running application.
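A short sketch of both workflows with the docker CLI; the image name, port, and paths are placeholder assumptions:

# development: mount the working copy over the path the Dockerfile would COPY to,
# so a code change only needs a container restart, not an image rebuild
$ docker run -d --name myapp-dev -p 8000:8000 -v "$PWD":/app myapp:dev

# production: bake the new code into a fresh image and swap the container
$ docker build -t myapp:v2 .
$ docker stop myapp && docker rm myapp
$ docker run -d --name myapp -p 8000:8000 myapp:v2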

What do you use for running scheduled tasks in Production for Python?

The thing is, I read this post stating best practices to set up code to run at a specified interval over a period of time using the Python library APScheduler. Now, it obviously works perfectly fine if I do it in a test environment and run the application from the command prompt.
However, I come from a background where most of my projects are university-level and never ran in production, but for this one I would like to. I have access to AWS and can configure any kind of server on AWS, and I am open to other options as well. It would be great to get a head start on what to look at if I have to run this application as a service from a server or a remote machine, without having to constantly monitor it and provide input at the command prompt.
I do not have any experience running Python applications in production, so any input would be appreciated. Also, I do not know how to execute this code in production (except through the AWS CLI), but that session expires once I close my CLI, so that does not seem like the most appropriate way to do it; any help on that end would be appreciated too.
The answer was very simple; it may not make a lot of sense, and it might not be applicable to all.
What I had was a Python Flask application, so I configured the app in a virtual environment using eb-virt on the AWS server, then created an executable WSGI script, which I ran as a service using the mod_wsgi plugin for the Apache HTTP server, and then I was able to run my app.
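For the "session expires when I close my CLI" problem specifically, a common alternative for a pure scheduler process (no web server needed) is to run it under systemd so it survives logout and restarts on boot. A minimal sketch, with placeholder paths, user, and script name:

$ sudo tee /etc/systemd/system/scheduler.service <<'EOF'
[Unit]
Description=APScheduler job runner
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myapp
ExecStart=/home/ubuntu/myapp/venv/bin/python scheduler.py
Restart=always

[Install]
WantedBy=multi-user.target
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now scheduler.service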

Can I use Heroku as a Python server?

My web host does not have Python and I am trying to build a machine learning application. I know that Heroku lets you use Python. I was wondering if I could use Heroku as a Python server? As in, I would let Heroku do all of the Python processing for me and use my regular domain for everything else.
Yes, and it may be a pain at first, but once it is set up, I would say Heroku is the easiest platform to continually deploy to. However, it is not intuitive - don't just 'take a stab' at it; follow a tutorial and try to understand why Heroku works the way it does.
Following the docs is a good bet; Heroku has great documentation for the most part.
Here's the generalized workflow for deploying to Heroku (condensed into commands in the sketch below):
Locally, create your project and use virtualenv to install/manage libraries.
Initialize a git repository in the base dir of your Python project; create a Heroku remote (heroku create).
Create a Procfile for Heroku to use when starting gunicorn (or see the options for using waitress, etc.); this is used by Heroku to start your process.
cd to your base dir; freeze your virtualenv (pip freeze > requirements.txt) and add/commit requirements.txt. This tells Heroku what packages need to be installed, a requirement for your deployment to work. If you try to run a Python project with required packages missing, the app will be unable to start and Heroku will display an Internal Server Error.
Whenever changes are made, git commit your changes and git push heroku master to push all commits to Heroku. This will cause Heroku to restart the server application with your updated deployment. If there's a failure, you can use heroku rollback to return to your last deployment.
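Condensed into commands, the workflow above looks roughly like this; "myapp.wsgi" in the Procfile is a placeholder for your own WSGI module:

$ pip freeze > requirements.txt
$ echo "web: gunicorn myapp.wsgi" > Procfile   # tells Heroku how to start the process
$ git init && git add . && git commit -m "initial commit"
$ heroku create                    # provisions the app and adds the 'heroku' remote
$ git push heroku master           # build and deploy
$ heroku rollback                  # only if the new release misbehaves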
In reality, it's not a pain in the ass, just particular. Knowing the rules of Heroku, you are able to manage your deployment with command-line git commands with ease.
One caveat - if deploying Django, Flask, etc. applications, there are peculiarities to account for; specifically, non-project files (including uploaded assets) should NOT be stored on Heroku, as Heroku periodically restarts your 'dyno' (server instance(s)), loading the whole project from the latest push to Heroku. With Django and Flask, this typically means serving assets/static/media files from an Amazon S3 bucket.
That being said, if you use virtualenv properly, provision your databases, and follow Heroku practices for serving files and committing updates, it is (imho) the absolute best platform out there for ease of use, reliable uptime, and well-oiled rolling deployments.
One last tip - if you are creating a Django app, I'd suggest starting your project out of this boilerplate. I have a custom one I use for new projects and can start and publish a project in minutes.
Yes, you can use Heroku as a Python server. I put a Python Flask server on Heroku, but it was a pain: Heroku seemed to have some difficulties, and there was lots of conflicting advice on getting around them. I eventually got it working; I can't remember which web page had the ultimate answer, but you might look at this one: http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-xviii-deployment-on-the-heroku-cloud
Have you tried building your Python server on Heroku using Twisted?
I don't know if this can help you.
I see the doc 'Getting Started on Heroku with Python' is about Django.
It is clear from the docs that Heroku can run Twisted:
"Pure Python applications, such as headless processes and evented web frameworks like Twisted, are fully supported."
django-twisted-server embeds Twisted in Django, but it isn't on Heroku.

How to set up Git to deploy Python app files to an Ubuntu server?

I set up a new Ubuntu 12.10 server on VPS hosting. I have installed all the required software like Nginx, Python, MySQL, etc. I am configuring it to deploy a Flask + Python app using uWSGI. It's working fine.
But to create a basic app I used PuTTY (from Windows) and created the required app .py files by hand.
Now I want to set up Git so that I can push my code to the required directory, say /var/www/mysite.com/app_data, so that I don't have to use SSH or FileZilla every time I make changes to my website.
Since I use both Ubuntu and Windows to develop the app, setting up this kind of Git functionality would help me push changes easily to my cloud server.
How can I set up this Git functionality on Ubuntu? And how can I access it and deploy data using tools like Git Bash, etc.?
Please suggest.
A modified version of innaM's answer:
Concept
Have three repositories:
devel - development on your local development machine
central - repository server - like GitHub, Bitbucket or anything else
prod - production server
Then you commit things from devel to central, and as soon as you want to deploy on prod, you ask prod to pull data from central.
"Asking" the prod server to pull the updates can be managed by cron (then you have to wait a moment) or by other means, like a one-shot ssh call asking it to run git pull and possibly restart your app.
Step by step
In more detail, you can go this way.
Prepare repo on devel
Develop and test the app on your devel machine.
Put it into a local repository:
$ git init
$ git add *
$ git commit -m "initial commit"
Create repo on central server
E.g. bitbucket provides this description: https://confluence.atlassian.com/display/BITBUCKET/Import+code+from+an+existing+project
Generally, you create the project on Bitbucket, find its URL, and then from your devel repo call:
$ git remote add origin <bitbucket-repo-url>
$ git push origin
Clone central repo to prod server
Log onto your prod server.
Go to /var/www and clone from Bitbucket:
$ cd /var/www
$ git clone <bitbucket-repo-url>
$ cd mysite.com
and you shall have your directory ready.
Trigger publication of updates to prod
There are numerous options, one being a cron task which would regularly call
$ git pull
In case your app needs a restart after an update, then you have to ensure the restart happens (this is possible using the git log command, which will show new entries after an update, or by checking whether git pull reported any changes).
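A sketch of the cron variant; paths, branch, and the restart command are placeholders. It restarts the app only when the pull actually brought new commits:

#!/bin/sh
# /usr/local/bin/update-mysite.sh (hypothetical path), run periodically from cron
cd /var/www/mysite.com || exit 1
OLD=$(git rev-parse HEAD)        # remember where we were
git pull -q origin master
NEW=$(git rev-parse HEAD)
# restart only if the pull moved HEAD, i.e. new commits arrived
if [ "$OLD" != "$NEW" ]; then
    sudo systemctl restart myapp
fi

A crontab entry such as */5 * * * * /usr/local/bin/update-mysite.sh would then poll every five minutes.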
Personally, I would use a "one shot ssh" call (you asked not to use ssh, but I assume you are asking for a "simpler" solution, so a one-shot call works more simply than using ftp, scp or other magic).
From your devel machine (assuming you have ssh access there):
$ ssh user@prod.server.com "cd /var/www/mysite.com && git pull origin && myapp restart"
The advantage is that you control the moment the update happens.
Discussion
I use similar workflow.
rsync seems in many cases to serve well enough or better (be aware of files created at app runtime, and of files in your app which are removed between versions and must be removed on the server too).
salt (SaltStack) could serve too, but requires a bit more learning and setup.
I have learned that keeping source code and configuration data in the same repo sometimes makes the situation more difficult (that is why I am working on using salt).
The fab command from Fabric (Python-based) may be the best option (in case installation on Windows becomes difficult, look at http://ridingpython.blogspot.cz/2011/07/installing-fabric-on-windows.html).
Create a bare repository on your server.
Configure your local repository to use the repository on the server as a remote.
When working on your local workstation, commit your changes and push them to the repository on your server.
Create a post-receive hook in the server repository that calls "git archive" and thus transfers your files to some other directory on the server, as sketched below.
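A sketch of those four steps as commands; the repository path, server address, and target directory are placeholders:

# on the server: create a bare repository
$ git init --bare /home/user/mysite.git

# still on the server: install the post-receive hook and make it executable
$ cat > /home/user/mysite.git/hooks/post-receive <<'EOF'
#!/bin/sh
# export the pushed tree into the live directory
git archive master | tar -x -C /var/www/mysite.com/app_data
EOF
$ chmod +x /home/user/mysite.git/hooks/post-receive

# on the workstation (Git Bash on Windows works the same way):
$ git remote add deploy user@server:/home/user/mysite.git
$ git push deploy master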

Django same project multiple times on one server

I had to develop something in Django (I'm new to it) and it went quite smoothly. But after delivering to the client, I had to set up a second "testing" instance so that any new features would be tested on it, to avoid errors in the production one.
And I have only one Apache server at my disposal, and this bred some weird things.
I run my applications by adding the path to the WSGI script in httpd.conf.
It works fine; the new server is up and running, and it uses a different database, so all is good. But it doesn't use the views and models from its own folder; it uses the ones from the original app instead, and I have just run out of ideas on how to fix it. Please help me in some way.
I believe that your two Django projects should be deployed on your staging and production servers as two completely separate projects/directories.
If you use version control, this could be as trivial as branching your main project and adding the new features. After you have two separate code bases, you can put your fixed branch on your production server.
Your project can exist anywhere on your server. You could set up a staging subdomain and create a virtualhost that points to your Django project branch:
http://httpd.apache.org/docs/2.2/vhosts/examples.html
This would allow both projects to exist on the same server without one project having to be aware of the other.
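A hedged sketch of what the two virtualhosts could look like with mod_wsgi; all names and paths are placeholders. Giving each instance its own WSGIDaemonProcess/WSGIProcessGroup puts the two projects in separate Python interpreters, which is the usual fix when one instance picks up the other's views and models:

<VirtualHost *:80>
    ServerName example.com
    WSGIDaemonProcess prod python-path=/var/www/prod
    WSGIProcessGroup prod
    WSGIScriptAlias / /var/www/prod/mysite/wsgi.py
</VirtualHost>

<VirtualHost *:80>
    ServerName staging.example.com
    WSGIDaemonProcess staging python-path=/var/www/staging
    WSGIProcessGroup staging
    WSGIScriptAlias / /var/www/staging/mysite/wsgi.py
</VirtualHost>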
