DispatcherMiddleware accessing distributed apps on local/remote servers - python

I am trying to run a Flask app locally which also needs to access resources served by a separate Flask app deployed on a remote server on the local network. I thought this might be possible using a DispatcherMiddleware layer locally, based on the examples from:
http://flask.pocoo.org/docs/0.12/patterns/appdispatch/
How to implement Flask Application Dispatching by Path with WSGI?
Both examples require the DispatcherMiddleware layer (running locally) to be able to import the constituent apps (which may be on a remote server), for example:
from werkzeug.wsgi import DispatcherMiddleware  # werkzeug.middleware.dispatcher in newer Werkzeug
from app import app as app1
from app2.app import app as app2
from app3.app import app as app3

application = DispatcherMiddleware(app1, {
    '/app2': app2,  # may be remote
    '/app3': app3   # may be remote
})
Is there any way to achieve this pattern given the distributed apps, short of mounting the remote server's path as a network share and importing the apps over it?

DispatcherMiddleware is for serving multiple WSGI applications (like Flask) with one WSGI server (like Gunicorn). The WSGI server runs the app(s), the web server (like Nginx) passes requests to the WSGI server.
If your apps are distributed, then they are run on their own machines. A WSGI server (and software in general) can't run things on other machines, so using DispatcherMiddleware makes no sense for that; forwarding requests by path to apps on other machines is a job for an HTTP reverse proxy such as Nginx, not for WSGI dispatch.
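For contrast, here is a minimal sketch of the pattern DispatcherMiddleware is actually designed for: several locally importable WSGI apps combined into one callable that a single WSGI server runs (the app names here are illustrative, not from the question):
from flask import Flask
from werkzeug.middleware.dispatcher import DispatcherMiddleware

main = Flask("main")
admin = Flask("admin")

@main.route("/")
def main_index():
    return "main app"

@admin.route("/")
def admin_index():
    return "admin app"

# One WSGI callable for one WSGI server (e.g. gunicorn); requests under
# /admin are handed to the admin app, everything else goes to main.
application = DispatcherMiddleware(main, {"/admin": admin})
Every app here must be importable in the same Python process, which is exactly what a remote app isn't.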

Related

Python serverless webapp vs WSGI server

I'm developing a web application with Python Flask.
I read something about WSGI servers and the warning message "WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead."
If I'm using GKE or App Engine or Cloud Run, is WSGI something I need to learn?
TLDR: No, you don't. You just need to know how to run your app with Flask. Having said that, learning/understanding WSGI isn't a waste of time, but it isn't required.
Longer response
Gunicorn is a production WSGI web server, and it is what Google uses in production to run your apps on Google App Engine (if your app doesn't specify an entrypoint). Waitress is another production WSGI web server.
You don't necessarily have to 'learn' how to use any of them, or the intricacies of WSGI, to be able to build an app. Learning and understanding how Flask works is good enough.
For Google App Engine
Just build and test your app in your dev environment with Flask (e.g. run it with flask run after pointing FLASK_APP at your main module). When you deploy your app to Google App Engine, it will be run with gunicorn (unless you specified an entrypoint that doesn't use gunicorn).
If on the other hand you use dev_appserver.py to run your app locally, e.g. dev_appserver.py app.yaml, the gcloud CLI will first install gunicorn and then use it to run your app on your local machine.
In both of these instances, you don't have to be an expert on WSGI or gunicorn. Just knowing enough to run your app with Flask is what you need.
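If you do want to control the WSGI server yourself, the entrypoint field in app.yaml is where you would do it; a hypothetical example (the module name main and the flags are illustrative):
runtime: python39
entrypoint: gunicorn -b :$PORT main:app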
However, note that you can't run Python 3 apps locally with dev_appserver.py on a Windows machine (see the Google documentation). I believe it's because gunicorn doesn't run on Windows. But if you still want to use dev_appserver.py for Python 3 apps on a Windows machine, you can check out a patch we created (it essentially swaps out gunicorn for Waitress when running your app on your dev machine).
For Cloud Run
You can code and test your app with Flask and then use gunicorn in the container (you don't necessarily have to be an expert or know a lot about gunicorn). See the 'hello world' sample application from Google.
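Roughly, that sample boils down to a plain Flask app that gunicorn serves inside the container; a minimal sketch (the module name main and the gunicorn command are assumptions, not copied verbatim from the sample):
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello, World!"

if __name__ == "__main__":
    # Local development only; in the container the app is started by gunicorn,
    # e.g. gunicorn --bind :$PORT main:app
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))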

How to properly serve a Django application through Twisted's Web server?

I am building a system that has some components, each of which will run in its own process or thread, and they need to communicate with each other. One of those components is a Django application; the internal communication with the Django app will not be done through HTTP. Looking for networking libraries I found Twisted (awesome library!), and reading its documentation I found that Twisted implements the WSGI specification too, so I thought its web server could serve WSGI applications like Django. Following the docs I came up with the following script to serve the Django app:
from twisted.web import server
from twisted.internet import reactor, endpoints
from twisted.web.wsgi import WSGIResource
from twisted.python.threadpool import ThreadPool
from mysite.wsgi import application as django_application
# Create and start a thread pool to handle incoming HTTP requests
djangoweb_threadpool = ThreadPool()
djangoweb_threadpool.start()
# Cleanup the threads when Twisted stops
reactor.addSystemEventTrigger('after', 'shutdown', djangoweb_threadpool.stop)
# Setup a twisted Service that will run the Django web app
djangoweb_request_handler = server.Site(WSGIResource(reactor, djangoweb_threadpool, django_application))
# Listen for HTTP connections on TCP port 8000
djangoweb_server = endpoints.TCP4ServerEndpoint(reactor, 8000)
djangoweb_server.listen(djangoweb_request_handler)
# Start the Twisted event loop
reactor.run()
Save it in a file like runserver.py in the same directory as manage.py; you can then start the WSGI server by running python runserver.py.
To test it, I made a Django view that makes a blocking call to time.sleep(); it worked fine. Since the server is multithreaded, it did not block other requests, so I think it works well with synchronous Django code. I could set up another service with a custom protocol as a gateway for internal communication.
1) Does that script properly load the Django app? Will it work the same way as other WSGI servers like gunicorn and uWSGI?
2) Will those threads run in parallel?
hendrix is a project that lets you run Django via Twisted. It looks like it can also run other Twisted services if desired (https://hendrix.readthedocs.io/en/latest/deploying-other-services/).
If you're in the early stages of development, consider klein. It's more akin to Flask than Django, though.
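A minimal klein sketch, just to show the flavour (the route and port are illustrative); klein runs on the Twisted reactor, so it composes naturally with other Twisted services:
from klein import Klein

app = Klein()

@app.route("/")
def home(request):
    # Twisted writes bytes to the transport, so return bytes
    return b"Hello from klein"

# Starts a Twisted reactor listening on localhost:8080
app.run("localhost", 8080)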

Am I using Python Flask's built-in server?

I am building a back end in Python via the Python Flask application from IBM Cloud/Bluemix. I have heard/read a lot of complaints that Flask's built-in server isn't good for production. But how do I know whether the application uses Flask's built-in server or whether IBM sets up something else? Is there a simple way to see this in the code?
Deploying the Flask boilerplate app from the IBM cloud catalogue will indeed deploy a Flask application running on the Flask dev webserver.
You will need to alter the application if you want to run a production WSGI server.
I work for IBM and am in this stuff all day every day.
If you want to verify this, SSH into your application container on Cloud Foundry with the bash command
cf ssh <yourappnamehere>
You will need to have either the bluemix or cloud foundry CLIs installed and be logged in to the relevant endpoint before submitting this command.
It will open a bash shell in your application container, and you can cd around and open and/or download your project files for inspection.
This line:
app = Flask(__name__)
is a sure-fire way to know that you are running a Flask web application.
If you are concerned with which WSGI server your application is running under, checking your Procfile (you should see it when SSHing into your container) will show you which command starts your application. If the command is
python <yourapp>.py
then you are running the dev server. Otherwise, the app is being started by some other command, most likely the WSGI server's own command rather than python, which imports your application as a dependency.
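For illustration, the two cases could look something like this in a Procfile (the module name server and the callable server:app are hypothetical, not taken from the boilerplate). The dev-server case:
web: python server.py
versus a production WSGI server case, with gunicorn added to requirements.txt:
web: gunicorn server:app --bind 0.0.0.0:$PORT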
You can also take a look at whether or not any WSGI server libraries were downloaded during the compilation of your droplet, and what command was used to start your application with
cf logs <yourappname> --recent
after deploying it.
Or, you can just believe me that the boilerplate deploys a Flask app under a Flask dev server.
A tutorial on running Flask on a different WSGI server:
https://www.digitalocean.com/community/tutorials/how-to-serve-flask-applications-with-uwsgi-and-nginx-on-ubuntu-14-04

Are a WSGI server and HTTP server required to serve a Flask app?

Setting up Flask with uWSGI and Nginx can be difficult. I tried following this DigitalOcean tutorial and still had trouble. Even with buildout scripts it takes time, and I need to write instructions to follow next time.
If I don't expect a lot of traffic, or the app is private, does it make sense to run it without uWSGI? Flask can listen to a port. Can Nginx just forward requests?
Does it make sense to not use Nginx either, and just run the bare Flask app on a port?
When you "run Flask" you are actually running Werkzeug's development WSGI server, and passing your Flask app as the WSGI callable.
The development server is not intended for use in production. It is not designed to be particularly efficient, stable, or secure. It does not support all the possible features of an HTTP server.
Replace the Werkzeug dev server with a production-ready WSGI server such as Gunicorn or uWSGI when moving to production, no matter where the app will be available.
The answer is similar for "should I use a web server". WSGI servers happen to have HTTP servers but they will not be as good as a dedicated production HTTP server (Nginx, Apache, etc.).
Flask documents how to deploy in various ways. Many hosting providers also have documentation about deploying Python or Flask.
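For example, if the app object lives in a module called myapp.py (a hypothetical name), running it under Gunicorn is a one-liner:
gunicorn -w 4 -b 127.0.0.1:8000 myapp:app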
First create the app:
import flask
app = flask.Flask(__name__)
Then set up the routes; when you want to start the app:
import gevent.pywsgi

host, port = "0.0.0.0", 5000  # listen address and port; adjust to your deployment
app_server = gevent.pywsgi.WSGIServer((host, port), app)
app_server.serve_forever()
Run this script to start the application, rather than having to tell gunicorn or uWSGI to run it.
I wanted the utility of Flask to build a web application, but had trouble composing it with other elements. I eventually found that gevent.pywsgi.WSGIServer was what I needed. After the call to app_server.serve_forever(), call app_server.stop() when you want to exit the application.
In my deployment, my application is listening on localhost:port using Flask and gevent, and then I have Nginx reverse-proxying HTTPS requests to it.
You definitely need a production WSGI server such as Gunicorn, because Flask's development server is meant for ease of development, without much configuration for fine-tuning and optimization.
E.g. Gunicorn has a variety of configuration options depending on the use case you are trying to solve, while the Flask development server does not have these capabilities. In addition, development servers show their limitations as soon as you try to scale and handle more requests.
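As a rough illustration of those options, a gunicorn.conf.py might look like this (the values are illustrative, not recommendations), started with gunicorn -c gunicorn.conf.py myapp:app where myapp:app is your WSGI callable:
# gunicorn.conf.py
bind = "127.0.0.1:8000"  # address the WSGI server listens on
workers = 4              # number of worker processes
threads = 2              # threads per worker
timeout = 30             # seconds before an unresponsive worker is restarted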
As far as needing a reverse proxy server such as Nginx is concerned, it depends on your use case.
If you are deploying your application behind a modern AWS load balancer such as an Application Load Balancer (not a Classic Load Balancer), that by itself will suffice for most use cases. There is no need to put effort into setting up Nginx if you have that option.
One purpose of a reverse proxy is to handle slow clients, meaning clients that take time to send their requests. The reverse proxy or load balancer buffers each request until it has been received in full and only then passes it to Gunicorn, so Gunicorn's workers aren't tied up waiting on slow clients. This improves the performance of your application considerably.
