Running a daemonized bottle application with nginx - python

I have a bottle application (specifically, homu) which I want to deploy on a server.
The traditional way to run this application is simply to run the program (e.g. python whatever.py), without any server integration. The application is stateful and updates its state by listening to the GitHub webhooks API. It also has a configuration panel served by bottle.
Whilst it is able to recover from a crash, recovery requires a lot of GitHub API requests (which get throttled), so it's preferable to keep it running continuously.
Now, I know how to daemonize a bottle application, but that means running it as a separate program on a separate port from nginx. I'd like nginx to delegate certain paths to the running bottle application.
How do I do this?
(Alternatively, a way to set things up so that nginx is responsible for keeping it running would be nice too.)

One way to do this is to reverse-proxy it:
location /foo/bar/ {
    proxy_pass http://localhost:someport/;
}
and then run the bottle application on someport. The trailing slash in proxy_pass makes nginx strip the matched /foo/bar/ prefix before forwarding, so the bottle app sees paths relative to its own root.
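As for the alternative of having something keep the application running: nginx itself cannot supervise processes, but a process manager can, and this pairs naturally with the reverse-proxy setup above. A minimal systemd unit sketch, in which the paths, names, and user are illustrative assumptions rather than anything from the original setup:

```ini
[Unit]
Description=homu bottle application
After=network.target

[Service]
# Interpreter and script paths are placeholders.
ExecStart=/usr/bin/python /opt/homu/whatever.py
Restart=always
RestartSec=5
User=homu

[Install]
WantedBy=multi-user.target
```

With Restart=always, systemd restarts the process after a crash, so the expensive GitHub state rebuild only happens on an actual failure rather than on every manual restart.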

Deploying updates to a Flask application

I'm looking for high-level insight here, as someone coming from the PHP ecosystem. What's the common way to deploy updates to a live Flask application that's running on a single server (no load-balancing nodes), served by a WSGI server like Gunicorn behind Nginx?
Specifically, when you pull updates from a git repository or rsync files to the server, I'm assuming this leaves a small window in which a request can reach the application while its files are changing.
I've mostly deployed Laravel applications in production, so to prevent this I use php artisan down to put up a maintenance page while files copy, and php artisan up to bring the site back up when it's all done.
What's the equivalent with Flask, or is there some other way of handling this (Nginx config)?
Thanks
Looks like Docker might be my best bet:
Have Nginx running on the host, and the application running in container A with Gunicorn. Nginx directs traffic to container A.
Before starting the file sync, tear down container A and start up container B, which listens on the same local port. Container B can be a maintenance page or a copy of the application.
Start file sync and wait for it to finish. When done, tear down container B, and start container A again.
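The same swap can also be expressed in nginx itself: with a backup upstream, nginx fails over to the maintenance container automatically whenever the primary is down, so the tear-down/start-up ordering becomes less delicate. Ports and names below are illustrative, not taken from the setup above:

```nginx
upstream flask_app {
    server 127.0.0.1:8000;         # container A: the application
    server 127.0.0.1:8001 backup;  # container B: maintenance page, only used when A is down
}

server {
    listen 80;
    location / {
        proxy_pass http://flask_app;
    }
}
```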

Are a WSGI server and HTTP server required to serve a Flask app?

Setting up Flask with uWSGI and Nginx can be difficult. I tried following this DigitalOcean tutorial and still had trouble. Even with buildout scripts it takes time, and I need to write instructions to follow next time.
If I don't expect a lot of traffic, or the app is private, does it make sense to run it without uWSGI? Flask can listen to a port. Can Nginx just forward requests?
Does it make sense to not use Nginx either, just running the bare Flask app on a port?
When you "run Flask" you are actually running Werkzeug's development WSGI server, and passing your Flask app as the WSGI callable.
The development server is not intended for use in production. It is not designed to be particularly efficient, stable, or secure, and it does not support all the possible features of an HTTP server.
Replace the Werkzeug dev server with a production-ready WSGI server such as Gunicorn or uWSGI when moving to production, no matter where the app will be available.
The answer is similar for "should I use a web server". WSGI servers happen to have HTTP servers built in, but they will not be as good as a dedicated production HTTP server (Nginx, Apache, etc.).
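To make "WSGI callable" concrete: it is any object with the calling convention shown below, and a Flask app object implements exactly this interface. A stdlib-only sketch, where the app function and its response body are invented for illustration:

```python
import threading
import urllib.request
from wsgiref.simple_server import make_server

# A plain function with this signature is a complete WSGI application;
# flask.Flask instances expose the same interface.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

server = make_server("127.0.0.1", 0, app)  # port 0: let the OS pick a free port
port = server.server_port

t = threading.Thread(target=server.handle_request)  # serve exactly one request
t.start()
body = urllib.request.urlopen("http://127.0.0.1:%d/" % port).read()
t.join()
server.server_close()
print(body.decode())  # hello
```

This is the same interface a production WSGI server (Gunicorn, uWSGI) speaks to your app; only the server in front of the callable changes.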
Flask documents how to deploy in various ways. Many hosting providers also have documentation about deploying Python or Flask.
First create the app:
import flask
app = flask.Flask(__name__)
Then set up the routes, and when you want to start the app:
import gevent.pywsgi
# host and port are whatever you choose, e.g. "127.0.0.1" and 5000
app_server = gevent.pywsgi.WSGIServer((host, port), app)
app_server.serve_forever()
Run this script to start the application, rather than telling gunicorn or uWSGI to run it.
I wanted the utility of Flask to build a web application, but had trouble composing it with other elements. I eventually found that gevent.pywsgi.WSGIServer was what I needed. After the call to app_server.serve_forever(), call app_server.stop() when you want to exit the application.
In my deployment, my application is listening on localhost:port using Flask and gevent, and then I have Nginx reverse-proxying HTTPS requests to it.
You definitely need a production WSGI server such as Gunicorn, because Flask's development server is meant for ease of development, without much configuration for fine-tuning and optimization.
E.g. Gunicorn has a variety of configuration options depending on the use case you are trying to solve, and the Flask development server does not have these capabilities. In addition, development servers show their limitations as soon as you try to scale and handle more requests.
Whether you need a reverse proxy server such as Nginx depends on your use case.
If you are deploying your application behind a modern AWS load balancer such as an Application Load Balancer (NOT a Classic Load Balancer), that by itself will suffice for most use cases; there is no need to set up Nginx if you have that option.
One purpose of a reverse proxy is to handle slow clients, meaning clients that take a long time to send a request. The reverse proxy buffers each request until it has been received in full and then hands it to Gunicorn asynchronously, which improves the performance of your application considerably.

Best practice for making a django webapp restart itself

We have certain sysadmin settings that we expose to superusers of our django webapp. Things like the domain name (uses contrib.sites) and single sign-on configuration. Some of these settings are cached by the system, sometimes because we want to avoid an extra DB hit in the middleware on every request if we can help it, sometimes because it's contrib.sites, which has its own caching. So when the settings get changed, the changes don't take effect until the app is reloaded.
We want the app to restart itself when these changes are made, so that our clients don't need to pester us to do the restart for them.
Our webapp is running on apache via mod_wsgi, so we should be able to do this just by touching the wsgi file for the app whenever one of these settings is changed, but it feels a little weird to do that, and I'm worried there's some more graceful convention we should be following.
Is there a right way to apply updates that are cached and require the app to reload? Invalidating the caches for these things will be pretty hairy, so I think I'd avoid that unless the app restart thing has serious drawbacks.
For mod_wsgi read:
http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode
Some other WSGI servers have similar options, but they are usually more limited.
If you use WSGI and your process is watched by a controller like supervisord, gunicorn, uWSGI or similar, then you can simply send yourself a SIGINT or SIGQUIT (depending on the controller). It shuts down the current process gracefully, and the controller restarts it for you.
import os
import signal
# Exit cleanly; the supervising controller starts a fresh process.
os.kill(os.getpid(), signal.SIGQUIT)
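The pattern can be exercised without actually terminating by using a spare signal; SIGUSR1 and the flag below are illustrative stand-ins for the real controller-specific signal and the graceful-exit logic:

```python
import os
import signal

restart_requested = False

def handle_restart(signum, frame):
    # In a real app you would finish in-flight requests and then exit;
    # the controller (supervisord, gunicorn, ...) starts a fresh process.
    global restart_requested
    restart_requested = True

signal.signal(signal.SIGUSR1, handle_restart)
os.kill(os.getpid(), signal.SIGUSR1)  # simulate the settings-changed trigger
print(restart_requested)  # True
```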
If you are running it on Apache with mod_wsgi, just update the timestamp of the WSGI script file every time you make a change to a model. Apache automatically restarts the application when the WSGI file is updated.
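The touch itself can be done from the settings-changed handler in Python; os.utime bumps the file's modification time, which is what mod_wsgi watches. A sketch against a temporary stand-in file (the real path would be your WSGI script):

```python
import os
import tempfile
import time

# Temporary stand-in for the real mod_wsgi script file.
fd, wsgi_path = tempfile.mkstemp(suffix=".wsgi")
os.close(fd)

before = os.stat(wsgi_path).st_mtime
time.sleep(0.1)
os.utime(wsgi_path, None)  # Python equivalent of `touch`
after = os.stat(wsgi_path).st_mtime

print(after > before)  # True
os.unlink(wsgi_path)
```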
It depends on your setup:
If you are using mod_wsgi on a single server, you could touch the WSGI file to make Apache restart every instance of the app.
If you are using gunicorn, you probably use supervisord to control it. Then supervisorctl restart APPNAME would be the solution.
If you scale your app across multiple servers, you have to ensure that every server restarts its instances. There are several ways to achieve this:
use a shared filesystem: if you are using mod_wsgi, then a single touch counts for every server
log in to the other servers using ssh and make them restart your app
I am sure there are more ways to restart your app but it highly depends on your setup and whether or not you have to restart all instances or only one.

Is there a way to deploy new code with Tornado/Python without restarting the server?

I've recently started to experiment with Python and Tornado web server/framework for web development. Previously, I have used PHP with my own framework on a LAMP stack. With PHP, deploying updated code/new code is as easy as uploading it to the server because of the way mod_php and Apache interact.
When I add new code or update code in Python/Tornado, do I need to restart the Tornado server? I could see this being problematic if you have a number of active users.
(a) Do I have to restart the server, or is there another/better way?
(b) If so, how can I avoid users being disconnected/getting errors/etc. while it's restarting (which could take a few seconds)?
[One possible thought is to use the page flipping paradigm with Nginx pointing to a server, launch the new server instance with updated code, redirect Nginx there and take down the original server...?]
It appears the best method is to use Nginx with multiple Tornado instances, as I alluded to in my original question and as Cole mentions. Nginx can reload its configuration file on the fly, so the process looks like this:
Update Python/Tornado web application code
Start a new instance of the application on a different port
Update the configuration file of Nginx to point to the new instance (testing the syntax of the configuration file first)
Reload the Nginx configuration file with a kill -HUP command
Stop the old instance of Python/Tornado web server
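Step 3 usually amounts to changing a single line; keeping the backend address in an upstream block contains the edit. Ports below are illustrative:

```nginx
upstream tornado_app {
    # Point this at the freshly started instance, then test with
    # `nginx -t` and reload with kill -HUP as in step 4 above.
    server 127.0.0.1:8001;
}

server {
    listen 80;
    location / {
        proxy_pass http://tornado_app;
    }
}
```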
A couple useful resources on Nginx regarding hot-swapping the configuration file:
https://calomel.org/nginx.html (in "Explaining the directives in nginx.conf" section)
http://wiki.nginx.org/CommandLine (in "Loading a New Configuration Using Signals" section)
Use HAProxy or Nginx and proxy to multiple Tornado processes, which you can then restart one by one. The Tornado docs cover Nginx, but at the time of writing it did not support proxying WebSockets, so if you're using them you'll need HAProxy.
You could use a debug=True switch with the tornado web instance.
T_APP = tornado.web.Application(<URL_MAP>, debug=True)
This reflects the handler changes as and when they happen.
Is this what you are searching for?
A module to automatically restart the server when a module is modified.
http://www.tornadoweb.org/en/branch2.4/autoreload.html
If you just want to deploy new code with tornado/python during development without restarting the server, you can use the realtimefunc decorator in this GitHub repository.
