Deploying a Web.py application with WSGI, several servers - python

I've created a web.py application, and now that it is ready to be deployed, I want to run it on something other than web.py's built-in webserver. I want to be able to run it on different webservers, Apache or IIS, without having to change my application code. This is where WSGI is supposed to come in, if I understand it correctly.
However, I don't understand what exactly I have to do to make my application deployable on a WSGI server. Most examples assume you are using Pylons/Django/some-other-framework, where you simply run some magic command which fixes everything for you.
From what I understand of the (quite brief) web.py documentation, instead of running web.application(...).run(), I should use web.application(...).wsgifunc(). And then what?

Exactly what you need to do to host it with a specific WSGI hosting mechanism varies with the server.
For Apache/mod_wsgi and Phusion Passenger, you just need to provide a WSGI script file which contains an object called 'application'. For web.py 0.2, this is the result of calling web.wsgifunc() with the appropriate arguments. For web.py 0.3, you instead use the wsgifunc() member function of the object returned by web.application(). For details, see the mod_wsgi documentation:
http://code.google.com/p/modwsgi/wiki/IntegrationWithWebPy
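For example, a minimal mod_wsgi script file for web.py 0.3 could look like the sketch below (the URL mapping and handler class are placeholders, not taken from the question); mod_wsgi simply looks for the module-level name 'application':
# app.wsgi -- minimal sketch for web.py 0.3+ under mod_wsgi
import web

urls = ('/.*', 'index')  # placeholder URL mapping

class index:
    def GET(self):
        return "Hello, world!"

# mod_wsgi looks for a module-level object named 'application'
application = web.application(urls, globals()).wsgifunc()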
If instead you have to use FASTCGI, SCGI or AJP adapters for a server such as Lighttpd, nginx or Cherokee, then you need the 'flup' package to provide a bridge between those language-agnostic interfaces and WSGI. This involves calling a flup function with the same WSGI application object that something like mod_wsgi or Phusion Passenger would use directly, without the need for a bridge. For details, see:
http://trac.saddi.com/flup/wiki/FlupServers
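With flup, the same application object is wrapped in one of flup's bridge servers instead of being handed to the web server directly. A minimal FastCGI sketch (the URL mapping is again a placeholder):
# FastCGI bridge sketch: flup speaks FastCGI to Lighttpd/nginx and WSGI to web.py
from flup.server.fcgi import WSGIServer
import web

urls = ('/.*', 'index')  # placeholder URL mapping

class index:
    def GET(self):
        return "Hello, world!"

application = web.application(urls, globals()).wsgifunc()

if __name__ == '__main__':
    WSGIServer(application).run()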
The important thing is to structure your web application as its own self-contained set of modules. To work with a particular server, create a separate script file as necessary to bridge between what that server requires and your application code. Your application code should always live outside the web server's document directory; only the script file that acts as the bridge would sit in the document directory, if appropriate.

As of July 21, 2009, there is a much fuller installation guide at the web.py install site that discusses flup, FastCGI, Apache and more. I haven't tried it yet, but it seems much more detailed.

Here is an example of two apps hosted with the CherryPy WSGI server:
#!/usr/bin/python
from web import wsgiserver
import web

# webpy wsgi app
urls = (
    '/test.*', 'index'
)

class index:
    def GET(self):
        web.header("content-type", "text/html")
        return "Hello, world1!"

application = web.application(urls, globals(), autoreload=False).wsgifunc()

# generic wsgi app
def my_blog_app(environ, start_response):
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return ['Hello world! - blog\n']

"""
# single hosted app
server = wsgiserver.CherryPyWSGIServer(
    ('0.0.0.0', 8070), application,
    server_name='www.cherrypy.example')
"""

# multiple hosted apps with WSGIPathInfoDispatcher
d = wsgiserver.WSGIPathInfoDispatcher({'/test': application, '/blog': my_blog_app})
server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8070), d)
server.start()
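WSGIPathInfoDispatcher picks the mounted app whose prefix matches the start of the request path and shifts that prefix from PATH_INFO to SCRIPT_NAME before calling it, so requests under /blog reach my_blog_app and requests under /test reach the web.py app, which then matches the remaining path against its own URL patterns.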

Related

How to add gen.Task module in Tornado webserver running for django backend to allow multiple asynchronous requests

I am trying for the first time to deploy a Django website on a Microsoft web server in service mode (without being logged in) using nssm.
To do so, it seems I can't use the usual
python manage.py runserver 0.0.0.0
So I have tried to add a new tornado.py file to my project, which nssm would point to:
from tornado.wsgi import WSGIContainer
from tornado.ioloop import IOLoop
from tornado.web import FallbackHandler, RequestHandler, Application, StaticFileHandler
from wsgi import application
from mySite.settings import *

class MainHandler(RequestHandler):
    def get(self):
        self.write("Hi Tornado")

tr = WSGIContainer(application)

app = Application([
    (r"/tornado", MainHandler),
    (r"/static/(.*)", StaticFileHandler, {'path': STATIC_ROOT}),
    (r"/media/(.*)", StaticFileHandler, {'path': MEDIA_ROOT}),
    (r".*", FallbackHandler, dict(fallback=tr)),
])

if __name__ == '__main__':
    app.listen(8000)
    IOLoop.instance().start()
Pointing nssm at this file makes the deployment work, but unfortunately I have some long requests that call other APIs in the Django back end, and when I call one of these long services it is impossible to make another request, i.e. I have to wait for the first request to finish.
From what I have seen in other questions about the same issue on this site, I need to add the Tornado decorator below somehow:
@gen
I have tried a couple of options without success and can't find a working example with Django.
I use Django only for the MVC framework; I have no use for the ORM or the auth.
I have to keep Django because there is a lot of work in it, and the Microsoft web server/nssm because it is the best practice in my company, but can Tornado help me here? Am I looking in the right direction?
Thanks in advance to all those who take the time to help me with this problem.
Heed the warning in the WSGIContainer docs: Tornado's WSGIContainer has no parallelism and is almost certainly a worse choice than other WSGI servers like gunicorn or uwsgi for WSGI-based applications.
The @gen.coroutine decorator is for native Tornado applications; it is not available in any useful way for foreign applications running inside a WSGIContainer.
Tornado's Windows support is also limited.
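For contrast, here is a rough sketch (not from the original answer) of what @gen.coroutine looks like in a native Tornado handler; the backend URL is a placeholder. This style is only available to Tornado's own handlers, not to Django views served through WSGIContainer:
from tornado import gen
from tornado.httpclient import AsyncHTTPClient
from tornado.web import RequestHandler

class SlowBackendHandler(RequestHandler):
    @gen.coroutine
    def get(self):
        # The coroutine yields while the backend call is in flight,
        # so the IOLoop can keep serving other requests.
        client = AsyncHTTPClient()
        response = yield client.fetch("http://example.com/slow-api")  # placeholder URL
        self.write(response.body)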

Multithreaded Flask application on Apache server

I have a python script.
Main thread (if __name__ == '__main__', etc.): when the main thread starts, it launches several threads to listen to data streams and events and to process them. The main thread then starts running the Flask application (app.run()). Processing results and data are sent to the front-end Flask app (no issues here).
The Apache server and mod_wsgi require me to import the app directly, meaning that my other threads won't run.
My dilemma: in the examples I've seen, the .wsgi script does "from someapp import app as application". This would only run the Flask application. If I managed to somehow run the Python script as __main__ instead, the Flask application would be run on localhost:5000 by default, and changing that or using .run() is not recommended in production.
First of all, is it possible to get this application onto a server in its current structure? How would I get the whole application to work on a server? Would I need to completely restructure it? Is it not possible to specify host 0.0.0.0 and port 80 and then run the Python script instead of just importing the app? Any help is appreciated, as is any pointer to other documentation.
Edit: for the sake of testing, I will be using AWS Ubuntu (any other linux distro can be used/switched to if needed).
The short (and misleading) answer is yes, it is possible (make sure no other program, such as Apache, is already using port 80):
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80)
However, you should not do that. It is not recommended, as the documentation states:
You can use the builtin server during development, but you should use
a full deployment option for production applications. (Do not use the
builtin development server in production.)
Proxying HTTP traffic through apache2 to Flask is much better.
This way, apache2 can handle all your static files and act as a reverse proxy for your dynamic content, passing those requests to Flask.
To get threads, check the documentation of WSGIDaemonProcess.
An example Apache/mod_wsgi configuration looks like this:
WSGIDaemonProcess mysite processes=3 threads=2 display-name=mod_wsgi
WSGIProcessGroup mysite
WSGIScriptAlias / /some/path/wsgi.py
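The wsgi.py that WSGIScriptAlias points at then only needs to expose the Flask app under the name mod_wsgi expects. A minimal sketch (the module name myapp is an assumption, not taken from the question):
# /some/path/wsgi.py
from myapp import app as application  # mod_wsgi looks for an object named 'application'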
I managed to find an answer to this without diverging too far from guides on how to get a Flask application working with Python3 and Apache2.
In short, when you initialise Flask, you most likely do something like this:
from flask import Flask
app = Flask(__name__)
The proposed solution:
import atexit  # for detecting flask exit
import threading
from flask import Flask

shareddata = 0
running = False

def init_app():
    global shareddata
    global running
    running = True

    app = Flask(__name__)

    # some threading goes here
    # e.g.
    def jointhread():
        global running
        running = False
        t1.join()

    def MyThread1():
        while running:
            pass  # do something

    t1 = threading.Thread(target=MyThread1, args=[])
    t1.start()
    atexit.register(jointhread)

    return app

app = init_app()
Threading might not work, whichever's applicable.
I had a similar issue where I wanted a thread to constantly monitor data using an API. I ended up importing the function(s) I wanted threaded into my WSGI file and kicking them off there.
Example
import threading
from main import <threaded_function>
my_thread = threading.Thread(target=<threaded_function>)
my_thread.start()

Invoking a pyramid framework application from inside another application

I have a Python application running in a framework that drives a network protocol to control remote devices. Now I want to add a browser-based monitoring and control and I am looking at the Pyramid framework to build it.
Normally you start a Pyramid application using pserve from a command line, but I can't find any documentation or examples for how to invoke it inside a host application framework. This needs to be done in such a way that the Pyramid code can access objects in the host application.
Is this a practical use case for Pyramid or should I be looking for some other WSGI-based framework to do this?
A WSGI app is basically a function which receives some input and returns a response. You don't really need pserve to serve a WSGI app; it is more of a wrapper which assembles an application from an .ini file.
Have a look at the Creating Your First Pyramid Application chapter in the Pyramid docs:
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

def hello_world(request):
    return Response('Hello %(name)s!' % request.matchdict)

if __name__ == '__main__':
    config = Configurator()
    config.add_route('hello', '/hello/{name}')
    config.add_view(hello_world, route_name='hello')
    app = config.make_wsgi_app()
    server = make_server('0.0.0.0', 8080, app)
    server.serve_forever()
The last two lines create a server which listens on port 8080.
Now, the trickier problem is that the serve_forever call is blocking, i.e. the program stops on that line until you hit Ctrl-C and stop the script. This makes it a bit non-trivial to have your program "drive a network protocol to control remote devices" and serve web pages at the same time (unlike other event-based platforms such as Node.js, where it's trivial to have two servers listening on different ports within the same process).
One possible solution to this problem is to run the webserver in a separate thread.
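As a rough sketch (not from the original answer), the hello-world app above could be served from a daemon thread while the host application keeps running its own control loop:
import threading
from wsgiref.simple_server import make_server
from pyramid.config import Configurator
from pyramid.response import Response

def hello_world(request):
    return Response('Hello %(name)s!' % request.matchdict)

config = Configurator()
config.add_route('hello', '/hello/{name}')
config.add_view(hello_world, route_name='hello')
app = config.make_wsgi_app()

# serve_forever blocks, so run it off the main thread
server = make_server('0.0.0.0', 8080, app)
web_thread = threading.Thread(target=server.serve_forever)
web_thread.daemon = True  # do not keep the process alive just for the web server
web_thread.start()

# ... the host application's device-control loop continues here ...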

REST web service with Python using WSME

I'm trying to create a simple REST web service using the WSME technology described here:
https://pypi.python.org/pypi/WSME
It's not clear, however, how to proceed. I have successfully installed the WSME 0.6.4 package, but I don't understand what to do next.
The above link shows some Python code. If I wanted to test that code, what should I do? Do I have to create a .py file? Where should this file be saved? Are there services to be started?
The documentation is not clear: it says "With this published at the /ws path of your application". What application? Do I need to install a web server?
Thanks.
You could use a full-blown web server to run your application, for example Apache with mod_wsgi or uWSGI, but it is not always necessary.
You should also choose a web framework to work with.
According to the WSME docs, it supports the Flask microframework out of the box, which is simple enough to start with.
To get started, create a file with the following source code:
from wsgiref.simple_server import make_server
from wsme import WSRoot, expose

class MyService(WSRoot):
    @expose(unicode, unicode)
    def hello(self, who=u'World'):
        return u"Hello {0} !".format(who)

ws = MyService(protocols=['restjson', 'restxml'])
application = ws.wsgiapp()
httpd = make_server('localhost', 8000, application)
httpd.serve_forever()
Run this file and point your web browser to http://127.0.0.1:8000/hello.xml?who=John
you should get <result>Hello John !</result> in response.
In this example we have used Python's built-in web server, which is a good choice when you need to test something quickly.
In addition, I suggest reading How python web frameworks and WSGI fit together.

How to use CherryPy as the web server and Bottle as the application to support multiple virtual hosts?

I have a website (running on an Amazon EC2 instance) built as a Python Bottle application with CherryPy as its front-end web server.
Now I need to add another website with a different, already registered domain name. To reduce cost, I want to reuse the existing host.
Obviously, virtual hosts are the solution.
I know Apache mod_wsgi could do the trick, but I don't want to replace CherryPy.
I've googled a lot; there are some articles showing how to set up virtual hosts on CherryPy, but they all assume CherryPy as web server + web application, not CherryPy as web server and Bottle as application.
How do I use CherryPy as the web server and Bottle as the application to support multiple virtual hosts?
As you mentioned, use VirtualHost. In the example, cherrypy.Application instances are used, but any WSGI callable (e.g. a Bottle app) will do.
Perhaps you can simply put nginx in front as a reverse proxy and configure it to send the traffic for the two domains to the right upstream (the CherryPy web server).
Another idea would be to use Nginx (http://wiki.nginx.org/Main) with uWSGI (http://projects.unbit.it/uwsgi/) and the uWSGI Python plug-in.
uWSGI has a mode named Emperor that lets you link in vhosts (vassals), sort of.
I'm a newbie at this myself, so this is not necessarily an answer but rather a suggestion to check it out.
Just a heads up: uWSGI and Nginx can be a hassle to get working, depending on your Linux distro. It does work nicely with Bottle, I tested it myself.
Hope it helps.
jwalker's answer is pretty clear. In case any CherryPy newbie needs a whole script for reference, I post one below.
import cherrypy
from bottle import Bottle
import os

app1 = Bottle()
app2 = Bottle()

@app1.route('/')
def homePage():
    return "========= home1 ==============="

@app2.route('/')
def homePage_2():
    return "========= home2 ==============="

vhost = cherrypy._cpwsgi.VirtualHost(
    None,
    domains={
        'www.domain1.com': app1,
        'www.domain2.com': app2,
    }
)

cherrypy.tree.graft(vhost)
cherrypy.config.update({
    'server.socket_host': '192.168.1.4',
    'server.socket_port': 80,
})
cherrypy.engine.start()
cherrypy.engine.block()
You could make www.domain1.com and www.domain2.com point to one IP address of your server, so that it serves two domains from one web server.
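Before DNS is in place you can test the dispatch with an explicit Host header, for example curl -H "Host: www.domain1.com" http://192.168.1.4/, and each Bottle app will answer on the same port.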
