My end goal is to implement a WebSocket server using Python.
I'm doing this by importing Tornado in my Python scripts. I've also installed mod_wsgi in Apache, and its test script outputs "Hello World!", so WSGI seems to be working fine. Tornado is also working fine as far as I can tell.
The issue comes when I use Tornado's WSGI "Hello, world" script:
import tornado.web
import tornado.wsgi
import wsgiref.simple_server

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

if __name__ == "__main__":
    application = tornado.wsgi.WSGIApplication([
        (r"/", MainHandler),
    ])
    server = wsgiref.simple_server.make_server('', 8888, application)
    server.serve_forever()
First, I get a 500 error and the log tells me WSGI can't find 'application'.
So I removed the if __name__ == "__main__" guard, and the page loads indefinitely.
I assume this is because of server.serve_forever(), so I removed that too in an attempt to see "Hello, world".
But now I just get 404: Not Found. It's not my Apache 404 page, and I know that the server can find my main .wsgi file...
You can't use WebSockets with Tornado's WSGIApplication. To use Tornado's WebSocket support you have to use Tornado's HTTPServer, not Apache.
The WSGIApplication handler URLs are relative to the web server root. If your application URL is /myapp, your 'application' must look like this:
application = tornado.wsgi.WSGIApplication([
    (r"/myapp", MainHandler),
    (r"/myapp/login/etc", LoginEtcHandler),
])
Oh, and the documentation doesn't mention this (as usual): when running under Apache/mod_wsgi, __name__ will look something like _mod_wsgi_8a447ce1677c71c08069303864e1283e, so the if __name__ == "__main__" block never runs and 'application' never gets defined.
So a correct "Hello World" Python script will look like this:
/var/www/wsgi-scripts/myapp.wsgi:
import tornado.web
import tornado.wsgi
import wsgiref.simple_server

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('Hello World')

application = tornado.wsgi.WSGIApplication([
    (r"/myapp", MainHandler),
])
And in the apache config (not .htaccess):
WSGIScriptAlias /myapp /var/www/wsgi-scripts/myapp.wsgi
To use Tornado with Apache, add the mod_wsgi module to Apache:
apt-get install libapache2-mod-wsgi
Write a Tornado WSGI server in a .wsgi file.
NOTE: Don't use the if __name__ == "__main__" guard.
Configure apache.conf to run your server. To configure it, use this mod_wsgi guide.
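For reference, a minimal .wsgi file along those lines might look like the sketch below (it mirrors the example from the answer above; the /myapp prefix and the file path are assumptions that must match your WSGIScriptAlias, and on newer Tornado versions you would wrap a tornado.web.Application with tornado.wsgi.WSGIAdapter instead, as in the answers further down):
# /var/www/wsgi-scripts/myapp.wsgi (example path, adjust to your setup)
import tornado.web
import tornado.wsgi

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello World")

# mod_wsgi looks for a module-level variable named 'application'.
# No __main__ guard: __name__ is not "__main__" under mod_wsgi.
application = tornado.wsgi.WSGIApplication([
    (r"/myapp", MainHandler),
])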
If you still want to combine the two, you can use Apache as a proxy: it is just the first point in front of the user, but it actually reroutes the traffic to your local Tornado server (in and out).
In my case, for example, Apache listens on port 443 (some default config).
Then I run Tornado on port 8080 and, given a path, Apache will redirect:
# File: conf.d/myapp.conf
<VirtualHost *:80>
    ErrorLog "logs/myapp_error_log"

    ProxyPreserveHost On
    ProxyRequests Off

    <Proxy *>
        Require all granted
    </Proxy>

    RewriteEngine on
    RewriteCond %{REQUEST_METHOD} ^TRACE
    RewriteRule .* - [F]

    ProxyPassMatch "/myapp/(.*)" "http://localhost:8080/myapp/$1"
    ProxyPassReverse "/myapp/" "http://localhost:8080/myapp/"
</VirtualHost>
If you're using a RedHat-family OS, also turn on the ability for httpd to make network connections:
setsebool -P httpd_can_network_connect 1
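On the Tornado side, the backend behind that proxy is a plain (non-WSGI) Tornado server listening on port 8080. A minimal sketch, assuming the /myapp prefix from the config above:
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello from Tornado behind Apache")

if __name__ == "__main__":
    # The proxy forwards the full /myapp/... path, so the routes here
    # must include the /myapp prefix as well.
    app = tornado.web.Application([
        (r"/myapp.*", MainHandler),
    ])
    app.listen(8080)
    tornado.ioloop.IOLoop.current().start()
Since nothing is forced through WSGI in this setup, Tornado's asynchronous features remain available.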
I have a server written in Python 2.7/Tornado and I am trying to deploy it on AWS.
I came across AWS Elastic Beanstalk which looked like a very convenient method to deploy my code.
I went through this tutorial and was able to deploy the Flask sample app.
However, I can't figure out how to deploy a test tornado app like below.
import tornado.web
import tornado.ioloop

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

if __name__ == "__main__":
    app = tornado.web.Application([
        (r"/.*", MainHandler),
    ])
    app.listen(5000)
    tornado.ioloop.IOLoop.current().start()
All my requests result in an Error 500 when I deploy the above application, and I have no idea how to troubleshoot this, since I don't understand why the Flask sample works but the Tornado code does not.
The requirements.txt file has an entry for tornado==4.4.2 in it.
I tried adding a few log statements to write to an external file but the file is not being created, which probably means the application does not even start.
It would be great if someone can provide some steps on deploying a Tornado app on AWS-EB or how I should start troubleshooting this.
Please let me know if I need to provide any more details.
Thanks!
Update
After looking at the errors in the httpd error_log file, the AWS documentation, and Berislav Lopac's answer, I found the correct way to implement the Tornado server.
Here is a simple server:
import tornado.web
import tornado.wsgi
import tornado.ioloop

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

webApp = tornado.web.Application([
    (r"/", MainHandler),
])

# Wrapping the Tornado Application into a WSGI interface.
# As per AWS EB requirements, the WSGI entry point must be named
# 'application'.
application = tornado.wsgi.WSGIAdapter(webApp)

if __name__ == '__main__':
    # If testing the server locally, start on the specified port
    webApp.listen(8080)
    tornado.ioloop.IOLoop.current().start()
Additional Links:
Tornado WSGI Documentation
You can deploy a Tornado application via WSGI:
import tornado.web
import tornado.wsgi

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

tornado_app = tornado.web.Application([
    (r"/", MainHandler),
])
application = tornado.wsgi.WSGIAdapter(tornado_app)
http://www.tornadoweb.org/en/stable/guide/running.html
I believe your issue is related to the fact that Elastic Beanstalk uses WSGI for serving Python Web apps, while Tornado's server is not WSGI-compliant. You might want to try wrapping your app in the WSGI adapter before serving it via WSGI.
This should work fine unless you rely on Tornado's asynchronous capabilities, as WSGI is strictly synchronous.
I'm running a cherrypy based app on an openshift gear. Recently I've been getting a "503 service temporarily unavailable" error whenever I try to go to the site. Inspecting the logs, I see I'm getting an ImportError where I try to import CherryPy. This is strange - CherryPy is listed as a dependency in my requirements.txt and used to be imported just fine. I double checked to make sure I'm getting the right path to the openshift activate_this.py and it seems to be correct. I'm not quite sure where to look next; any help would be appreciated. Thanks!
The failed import is at line 14 of app.py:
import os
import files

virtenv = os.path.join(os.environ['OPENSHIFT_PYTHON_DIR'], 'virtenv')
virtualenv = os.path.join(virtenv, 'bin', 'activate_this.py')
conf = os.path.join(files.get_root(), "conf", "server.conf")

try:
    execfile(virtualenv, dict(__file__=virtualenv))
    print virtualenv
except IOError:
    pass

import cherrypy
import wsgi

def mount():
    def CORS():
        cherrypy.response.headers["Access-Control-Allow-Origin"] = os.environ['OPENSHIFT_APP_DNS']

    cherrypy.config.update({"tools.staticdir.root": files.get_root()})
    cherrypy.tools.CORS = cherrypy.Tool('before_handler', CORS)
    cherrypy.tree.mount(wsgi.application(), "/", conf)

def start():
    cherrypy.engine.start()

def end():
    cherrypy.engine.exit()

if __name__ == "__main__":
    mount()
    start()
UPDATE
I eventually saw (when pushing to the OpenShift repo using the git bash CLI) that the dependency installation from requirements.txt was failing with some exceptions I haven't looked into yet. It then goes on to install the dependencies from setup.py, and that works just fine.
Regarding the port-in-use issue... I have no idea. I changed my startup from tree.mount and engine.start to quickstart, and everything worked when I pushed to OpenShift. Just for kicks (and because I need it to run my tests), I switched back to cherrypy.tree.mount, pushed it, and it worked just fine.
Go figure.
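For reference, the quickstart variant mentioned above would look roughly like this (a sketch reusing the project's own wsgi and files modules and the conf path from the app.py shown earlier):
import os
import cherrypy
import files   # project module from the question
import wsgi    # project module from the question

conf = os.path.join(files.get_root(), "conf", "server.conf")

if __name__ == "__main__":
    # quickstart() mounts the app, starts the engine, and blocks,
    # replacing the separate mount()/start() calls above.
    cherrypy.quickstart(wsgi.application(), "/", conf)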
I use the app.py entry point for OpenShift. Here are several examples of how I start my server using the Pyramid framework on OpenShift. I use Waitress as the server, but I have also used the CherryPy WSGI server. Just comment out the code you don't want.
app.py
# Openshift entry point
import os

from pyramid.paster import get_app
from pyramid.paster import get_appsettings

if __name__ == '__main__':
    here = os.path.dirname(os.path.abspath(__file__))

    if 'OPENSHIFT_APP_NAME' in os.environ:  # are we on OPENSHIFT?
        ip = os.environ['OPENSHIFT_PYTHON_IP']
        port = int(os.environ['OPENSHIFT_PYTHON_PORT'])
        config = os.path.join(here, 'production.ini')
    else:
        ip = '0.0.0.0'  # localhost
        port = 6543
        config = os.path.join(here, 'development.ini')

    app = get_app(config, 'main')  # find 'main' in __init__.py; that is our wsgi app
    settings = get_appsettings(config, 'main')  # not really needed, but shows how to get settings from the '.ini' files

    # Waitress (remember to include the waitress server in "install_requires" in setup.py)
    from waitress import serve
    print("Starting Waitress.")
    serve(app, host=ip, port=port, threads=50)

    # CherryPy server (remember to include the cherrypy server in "install_requires" in setup.py)
    # from cherrypy import wsgiserver
    # print("Starting CherryPy Server on http://{0}:{1}".format(ip, port))
    # server = wsgiserver.CherryPyWSGIServer((ip, port), app, server_name='Server')
    # server.start()

    # Simple Server
    # from wsgiref.simple_server import make_server
    # print("Starting Simple Server on http://{0}:{1}".format(ip, port))
    # server = make_server(ip, port, app)
    # server.serve_forever()

    # Running 'production.ini' manually. I find this method the least compatible with Openshift,
    # since you can't easily start/stop/restart your app with the 'rhc' commands.
    # Maybe somebody can suggest a better way :)
    # Don't forget to set the host IP in 'production.ini'. Use port 8080 for Openshift.
    # You will need the 'pre_build' action hook (pkill python) so it stops the existing running instance of the server on OS.
    # You will also have to set up another custom action hook so 'rhc app-restart'/'stop' works.
    # See Openshift Origin User's Guide (I have not tried this yet).

    # Method #1
    # print('Running pserve production.ini')
    # os.system("pserve production.ini &")

    # Method #2
    # import subprocess
    # subprocess.Popen(['pserve', 'production.ini &'])
I am running twisted.web.server on localhost at port 8001 and apache2 with mod_proxy.
Apache is set to proxy according to the following config
http://localhost/jarvis ----> http://localhost:8001/
The httpd config for this rule is
ProxyPass /jarvis http://localhost:8001/
ProxyPassReverse /jarvis http://localhost:8001/
The twisted app's code fragment for server config is as follows:
if __name__ == '__main__':
    root = Resource()
    root.putChild("clientauth", boshProtocol())
    logging.basicConfig()
    factory = Site(root)
    reactor.listenTCP(8001, factory)
    reactor.run()
When I go to
http://localhost:8001/clientauth
it runs as expected.
However when I use
http://localhost/jarvis/clientauth
It give the error - "No such child resource."
As I understand it, the request is correctly proxied to the Twisted web server. But why is the child resource not identified?
You are missing a RewriteRule. I haven't tested it, but the fix for your problem is more or less like this:
RewriteRule ^/jarvis/(.*) /$1
Be sure to have mod_rewrite enabled.
Here is a link I usually use for reference: http://httpd.apache.org/docs/2.0/misc/rewriteguide.html
Good luck!
I have a very odd problem. I configured Lighttpd to pass /test to a FastCGI backend.
I just added this to the config:
fastcgi.server = ("/test" =>
    ("127.0.0.1" =>
        (
            "host" => "127.0.0.1",
            "port" => 7101,
            "docroot" => "/",
            "check-local" => "disable"
        )
    )
)
Now, when I start the flup example and hit 127.0.0.1:80/test, everything works fine. I tested uWSGI too; still fine.
The flup example:
#!/usr/bin/env python
from flup.server.fcgi import WSGIServer

def myapp(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return ['Hello World']

WSGIServer(myapp, bindAddress=('127.0.0.1', 7101)).run()
Now, the only problem is that when I start gevent, it won't work. Lighttpd's mod_fastcgi says the backend just blocked.
The funny part is that when I alter the handler to return just a string (even though WSGI requires an iterable) and hit 127.0.0.1:7101 from my browser, it works as expected. This is supposed to be a WSGIServer, so how can it work this way?
Here is the gevent code:
#!/usr/bin/python
"""WSGI server example"""
from gevent.wsgi import WSGIServer

def app(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    # return ["Hello World", StopIteration]  # this is the WSGI test, not working
    return "Hello World"
    # When set like this, the frontend on :80 still won't work (500 Internal Error),
    # but 127.0.0.1:7101 works like a standard http server.

if __name__ == '__main__':
    WSGIServer(('', 7101), app).serve_forever()
Bottom line: why does only gevent not work in this setup, while both flup and uWSGI are working? Is there some secret setting not mentioned in the official example here?
Because gevent.wsgi.WSGIServer is not a FastCGI server; it is only an HTTP server. You can proxy your requests from lighttpd to gevent as HTTP, or use WSGI.
You can see that flup here states it speaks FastCGI (not HTTP), and uWSGI here says it was "born as a WSGI-only server".
Now, gevent says here "Fast WSGI server based on libevent-http", which confused me, but then I tried gunicorn, and it still failed.
Then I found here that "Gunicorn 'Green Unicorn' is a Python WSGI HTTP Server for UNIX". That means gevent and gunicorn serve their WSGI handlers over HTTP, not FastCGI, but, as Fedor Gogolev said, from your handlers' point of view they are WSGI servers.
So for flup and uWSGI you configure lighttpd (or any other web server) to use the fastcgi module, but for gunicorn and gevent you use the proxy module, and with them you don't have to use a frontend at all! If you don't have static files to serve, or some other reason to keep one, you can omit the frontend, since gunicorn states that it is very fast and stable.
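For the proxy approach, the lighttpd side would look roughly like the sketch below (untested, using lighttpd's mod_proxy instead of mod_fastcgi; the path and port mirror the question's setup):
server.modules += ( "mod_proxy" )

proxy.server = ( "/test" =>
    ( ( "host" => "127.0.0.1", "port" => 7101 ) )
)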
I have written a small application using the Flask framework. I am trying to host it using CGI. Following the documentation, I created a .cgi file with the following content:
#!/usr/bin/python
from wsgiref.handlers import CGIHandler
from yourapplication import app
CGIHandler().run(app)
Running the file results in the following error:
...
File "/usr/lib/pymodules/python2.7/werkzeug/routing.py", line 1075, in bind_to_environ
wsgi_server_name = environ.get('HTTP_HOST', environ['SERVER_NAME'])
KeyError: 'SERVER_NAME'
Status: 500 Internal Server Error
Content-Type: text/plain
Content-Length: 59
In my application I have set:
app.config['SERVER_NAME'] = 'localhost:5000'
When I run the application with the Flask development server it works perfectly well.
As you can tell, I'm very new to this stuff. I have searched for others with similar errors, but with no luck. All help is appreciated.
I will try to show what I've done; it is working in a GoDaddy shared hosting account.
In the cgi-bin folder in the MYSITE folder, I added the following .cgi file:
#!/home/USERNAME/.local/bin/python3
from wsgiref.handlers import CGIHandler
from sys import path

path.insert(0, '/home/USERNAME/public_html/MYSITE/')
from __init__ import app

class ProxyFix(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        environ['SERVER_NAME'] = ""
        environ['SERVER_PORT'] = "80"
        environ['REQUEST_METHOD'] = "GET"
        environ['SCRIPT_NAME'] = ""
        environ['QUERY_STRING'] = ""
        environ['SERVER_PROTOCOL'] = "HTTP/1.1"
        return self.app(environ, start_response)

if __name__ == '__main__':
    app.wsgi_app = ProxyFix(app.wsgi_app)
    CGIHandler().run(app)
As you can see, the __init__ file in the MYSITE folder has the Flask app.
The most important thing is to set the permissions right. I set 755 on this folder AS WELL AS on the "/home/USERNAME/.local/bin/python3" folder!! Remember that the system needs this permission to run Flask.
To invoke the .cgi, I have the following .htaccess file in the MYSITE folder:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ /home/USERNAME/public_html/MYSITE/cgi-bin/application.cgi/$1 [L]
So it will route requests to the .cgi file when someone visits your page.
This is posted as an answer following the comments above for the sake of completeness.
As discussed above, CGI scripts should be executed by a server. Here's the abstract from the CGI 1.1 RFC:
The Common Gateway Interface (CGI) is a simple interface for running
external programs, software or gateways under an information server in
a platform-independent manner. Currently, the supported information
servers are HTTP servers.
For the environment variables (which were missing and triggered the error), see section 4.1 in the RFC.
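In other words, when the script is executed by a web server configured for CGI, the server provides SERVER_NAME and the other variables; they are only missing when the file is run by hand. A minimal sketch of how the .cgi file from the question could be exercised directly from a shell for a quick local test (the placeholder values below are assumptions chosen to match the question's app.config, not anything Flask requires):
#!/usr/bin/python
import os
from wsgiref.handlers import CGIHandler
from yourapplication import app

# Fill in the CGI environment variables a web server would normally
# provide, so the script can also be run directly for a quick test.
os.environ.setdefault('REQUEST_METHOD', 'GET')
os.environ.setdefault('PATH_INFO', '/')
os.environ.setdefault('SERVER_PROTOCOL', 'HTTP/1.1')
os.environ.setdefault('SERVER_NAME', 'localhost')
os.environ.setdefault('SERVER_PORT', '5000')
os.environ.setdefault('HTTP_HOST', 'localhost:5000')  # matches app.config['SERVER_NAME'] from the question

CGIHandler().run(app)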