I'm running CherryPy with mod_wsgi on Apache, alongside another PHP app. The CherryPy app is NOT mounted at the root, but rather at something like 'localhost/apps/myapp' via WSGIScriptAlias in the Apache config file.
In testapp.py I have tried the following, accessing localhost/apps/myapp in a browser each time:
app = cherrypy.tree.mount(MyApp(), '', 'settings.config')  # FAILS WITH 404
and
app = cherrypy.tree.mount(MyApp(), '/apps/myapp', 'settings.config')  # WORKS
The first case fails because CherryPy assumes it is mounted at the server root, rather than at the path where Apache mounts it via WSGIScriptAlias.
Is there a preferred way to make CherryPy apps work relative to the path where Apache mounts them under WSGIScriptAlias?
Basically, I'll be running several CherryPy apps under several different paths, and would prefer that Apache handle the dispatching (i.e. CherryPy just runs the app and doesn't worry about the relative path). That way I can avoid updating several different Python files/config files every time one of the relative paths on the server changes.
Any suggestions?
By the way, the CherryPy app is currently passed to the WSGI layer as follows:
app = cherrypy.tree.mount(HelloWorld(), '', 'settings.config')

def application(environ, start_response):
    return app(environ, start_response)
I am doing the following, although it requires CherryPy to know the relative path:
# Build an object tree that mirrors the URL structure.
class Dir: pass

root = Dir()
root.apps = Dir()
root.apps.myapp = MyApp()

cherrypy.tree.mount(root)
This lets me structure the application any way I need. I'm using nginx rather than Apache, but I don't think that makes any difference. It does get a bit wordy, though, if you're using long paths with not much else in between.
CherryPy supports other dispatchers which might be better suited to what you're trying to do, or perhaps you need to write a custom one.
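For instance, here is a minimal sketch using the built-in RoutesDispatcher (this assumes the routes package is installed; the route pattern and names are only illustrative):

import cherrypy

# Map URLs onto controllers explicitly instead of via the object tree.
# The ':action' route syntax comes from the Routes package.
d = cherrypy.dispatch.RoutesDispatcher()
d.connect('myapp', '/apps/myapp/:action', controller=MyApp())

conf = {'/': {'request.dispatch': d}}
cherrypy.tree.mount(root=None, config=conf)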
How should this
app = cherrypy.tree.mount(MyApp(), '', 'settings.config')
resolve http://localhost/apps/myapp ?
Have you tried http://localhost/ or http://localhost/MyApp?
It is also important where you have defined your WSGIScriptAlias in Apache.
vhost, location?
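One more thing worth trying, so the mount point never appears in your Python code at all: CherryPy's Application accepts script_name=None, which (at least in CherryPy 3.x, if I read the API correctly) makes it pull SCRIPT_NAME from the WSGI environ on each request, i.e. from whatever WSGIScriptAlias Apache used. A minimal sketch of such a WSGI script, reusing MyApp from the question:

import cherrypy

# script_name=None -> take the mount point from the WSGI environ
# (mod_wsgi sets SCRIPT_NAME from WSGIScriptAlias) on every request.
application = cherrypy.Application(MyApp(), script_name=None,
                                   config='settings.config')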
I want to make the setup of my Flask application as easy as possible by providing a custom CLI command for automatically generating SECRET_KEY and saving it to .env. However, it seems that invoking a command creates an instance of the Flask app, which then needs the not-yet-existing configuration.
Flask documentation says that it's possible to run commands without application context, but it seems like the app is still created, even if the command doesn't have access to it. How can I prevent the web server from running without also preventing the CLI command?
Here's almost-working code. Note that it requires the python-dotenv package, and it runs with flask run instead of python3 app.py.
import os
import secrets

from flask import Flask, session

app = Flask(__name__)

@app.cli.command(with_appcontext=False)
def create_dotenv():
    cwd = os.getcwd()
    filename = '.env'
    env_path = os.path.join(cwd, filename)
    if os.path.isfile(env_path):
        print('{} exists, doing nothing.'.format(env_path))
        return
    with open(env_path, 'w') as f:
        secret_key = secrets.token_urlsafe()
        f.write('SECRET_KEY={}\n'.format(secret_key))

@app.route('/')
def index():
    counter = session.get('counter', 0)
    counter += 1
    session['counter'] = counter
    return 'You have visited this site {} time(s).'.format(counter)

# Putting these under `if __name__ == '__main__'` doesn't work,
# because then they are not executed during `flask run`.
secret_key = os.environ.get('SECRET_KEY')
if not secret_key:
    raise RuntimeError('No secret key')
app.config['SECRET_KEY'] = secret_key
Some alternative options I considered:
I could allow running the app without a secret key, so the command works fine, but I don't want to make it possible to accidentally run the actual web server without a secret key. That would result in an error 500 when someone tries to use routes that rely on cookies, and it might not be obvious at the time of starting the server.
The app could check the configuration just before the first request comes in, as suggested here, but this approach would not be much better than the previous option for ease of setup.
Flask-Script is also suggested, but Flask-Script itself says it's no longer maintained and points to Flask's built-in CLI tool.
I could use a short delay before killing the application if the configuration is missing, so the CLI command would be able to run, but a missing secret key would still be easy to notice when trying to run the server. This would be quite a cursed approach, though, and who knows, maybe even illegal.
Am I missing something? Is this a bug in Flask or should I do something completely different for automating secret key generation? Is this abusing the Flask CLI system's philosophy? Is it bad practice to generate environment files in the first place?
As a workaround, you can use a separate Python/shell script file for generating SECRET_KEY and the rest of .env.
That will likely be the only script that needs to run without the configuration, so your repository shouldn't get too cluttered from doing so. Just mention the script in the README, and it probably won't result in a noticeably different setup experience either.
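A sketch of what that standalone script might look like (the file name generate_env.py is just a suggestion):

# generate_env.py - creates .env with a fresh SECRET_KEY; deliberately has
# no dependency on the Flask app, so it runs before any configuration exists.
import os
import secrets

def main():
    env_path = os.path.join(os.getcwd(), '.env')
    if os.path.isfile(env_path):
        print('{} exists, doing nothing.'.format(env_path))
        return
    with open(env_path, 'w') as f:
        f.write('SECRET_KEY={}\n'.format(secrets.token_urlsafe()))
    print('Wrote {}'.format(env_path))

if __name__ == '__main__':
    main()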
I think you are making it more complicated than it should be.
Consider the following code:
import os
app.config['SECRET_KEY'] = os.urandom(24)
It should be sufficient.
(But I prefer to use config files for Flask).
AFAIK the secret key does not have to be permanent; it simply needs to remain stable for the lifetime of the Flask process, because it is used for session cookies and maybe some internal stuff, but nothing critical to the best of my knowledge.
If your app were restarted, users would lose their sessions, but that's no big deal.
It depends on your current deployment practices, but if you were using Ansible, for example, you could automate the creation of the config file and/or the environment variables, and also run some sanity checks before starting the service.
The problem with your approach, as I understand it, is that you must give the web server privileges to write to the application directory, which is not optimal from a security POV. The web user should not have the right to modify application files.
So I think it makes sense to take care of deployment separately, either using bash scripts or Ansible, and you can also tighten permissions and automate other stuff.
I agree with the preceding answers; anyway, you could write something like
secret_key = os.environ.get('SECRET_KEY') or secrets.token_urlsafe(32)
so that you can still use your configured SECRET_KEY variable from the environment. If, for any reason, Python doesn't find the variable, it will be generated by the part after the or.
I have a website (running on an Amazon EC2 instance) built as a Python Bottle application with CherryPy as its front-end web server.
Now I need to add another website with a different domain name that's already registered. To reduce cost, I want to host it on the existing server.
Obviously, virtual hosts are the solution.
I know Apache mod_wsgi could do the trick, but I don't want to replace CherryPy.
I've googled a lot; there are some articles showing how to set up virtual hosts on CherryPy, but they all assume CherryPy as web server + web application, not CherryPy as the web server and Bottle as the application.
How can I use CherryPy as the web server and Bottle as the application while supporting multiple virtual hosts?
As you mentioned, use VirtualHost. In the example, cherrypy.Application instances are used, but any WSGI callable (e.g. a Bottle app) will do.
Perhaps you can simply put nginx in front as a reverse proxy and configure it to send the traffic for the two domains to the right upstream (the CherryPy web server).
Another idea would be to use Nginx (http://wiki.nginx.org/Main) with uWSGI (http://projects.unbit.it/uwsgi/) and its uWSGI-python plug-in.
uWSGI has a mode named Emperor that lets you link vhosts (vassals) in, sort of.
I'm a newbie at this myself, so this is not necessarily an answer, but rather a suggestion to check it out.
Just a heads up: uWSGI and Nginx can be a hassle to get working, depending on your Linux distro. It does work nicely with Bottle; I tested it myself.
Hope it helps.
jwalker's answer is pretty clear. In case any CherryPy newbie needs a whole script for reference, I post one below.
import cherrypy
from bottle import Bottle

app1 = Bottle()
app2 = Bottle()

@app1.route('/')
def homePage():
    return "========= home1 ==============="

@app2.route('/')
def homePage_2():
    return "========= home2 ==============="

# Dispatch to app1 or app2 based on the Host header;
# requests matching no domain fall through to the default app (None).
vhost = cherrypy._cpwsgi.VirtualHost(
    None,
    domains={
        'www.domain1.com': app1,
        'www.domain2.com': app2,
    },
)

cherrypy.tree.graft(vhost)
cherrypy.config.update({
    'server.socket_host': '192.168.1.4',
    'server.socket_port': 80,
})
cherrypy.engine.start()
cherrypy.engine.block()
You could make www.domain1.com and www.domain2.com point to the one IP address of your server, so it serves the two domains from one web server.
We have a hosting setup where we have one top level domain, and we host web applications under subpaths. For instance:
/projects -> Plone
/internal -> Tomcat
etc
In this scenario we need a way to tell the web application at the back end what its base path is, so that it can correctly generate links to its views and static content. For the examples above this is fine.
We have just started using Pyramid, served by Waitress, but so far we haven't figured out how to do this. Is there a clean way to configure this base path in Waitress, or is there a more flexible application server we can use that will support Pyramid?
Everything in WSGI is relative to the current request. You just have to have your environ set up properly (usually by your WSGI server).
For example your web application will know it is mounted at subpath /projects if request.environ['SCRIPT_NAME'] == '/projects'. If you want your application to be agnostic to its mount point, you can simply code it up as if it serves a view at /foo/bar. Then you mount your application on /projects via some middleware which can mutate the environ properly (mod_wsgi and some other servers should be able to do this for you automatically). Now when the incoming URL is /projects/foo/bar the environ['SCRIPT_NAME'] == '/projects' and environ['PATH_INFO'] == '/foo/bar', and your app can focus on the relative path.
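To make the environ mutation concrete, here is a minimal sketch of such a middleware (illustrative only; in practice the PasteDeploy prefix filter shown below handles this, and more, for you):

class PrefixMiddleware(object):
    # Move a fixed mount point from PATH_INFO into SCRIPT_NAME.
    def __init__(self, app, prefix):
        self.app = app
        self.prefix = prefix.rstrip('/')

    def __call__(self, environ, start_response):
        path = environ.get('PATH_INFO', '')
        if path.startswith(self.prefix):
            # The wrapped app now sees paths relative to its mount point.
            environ['SCRIPT_NAME'] = environ.get('SCRIPT_NAME', '') + self.prefix
            environ['PATH_INFO'] = path[len(self.prefix):] or '/'
        return self.app(environ, start_response)

# usage: wrapped = PrefixMiddleware(my_wsgi_app, '/projects')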
In Pyramid this would boil down to an extra step in your ini where you add the prefix middleware to your WSGI stack. The middleware handles mutating the PATH_INFO and SCRIPT_NAME keys in the environ for you.
[app:myapp]
use = egg:myapp
# ...
[filter:proxy-prefix]
use = egg:PasteDeploy#prefix
prefix = /projects
[pipeline:main]
pipeline =
    proxy-prefix
    myapp
In my Pyramid app's .ini config files (production and development), I'm doing something like this:
[app:main]
# ... the rest of your app section ...
filter-with = urlprefix

[filter:urlprefix]
use = egg:PasteDeploy#prefix
prefix = /mysubfolder
I think it probably accomplishes the same as Michael's answer above; I'm still relatively new to Pyramid as well and am going off recipes like you are. But the end result is that it creates a base URL of /mysubfolder from my root, and the rest of the app is relative to that. This is running under pserve locally and, I think, nginx on my web host.
repoze.vhm should work just fine for your use case.
I think it won't work if you want to use the virtual root feature, i.e. a subpath of your proxied web app (https://hidden.tld/root/ should appear as https://example.com/).
For exposing your app at a subpath of an external domain, repoze.vhm works just fine. IMO the best thing about it is that you don't need to put any subpath config whatsoever into your web app deployment. This allows you to change the URL to whatever you want on the proxy, or even expose the same app instance on multiple domain names and/or subpaths.
How can I be certain whether my application is running on the development server or not? I suppose I could check the value of settings.DEBUG and assume that if DEBUG is True then it's running on the development server, but I'd prefer to know for sure rather than relying on convention.
I put the following in my settings.py to distinguish between the standard dev server and production:
import sys
RUNNING_DEVSERVER = (len(sys.argv) > 1 and sys.argv[1] == 'runserver')
This also relies on convention, however.
(Amended per Daniel Magnusson's comment)
server = request.META.get('wsgi.file_wrapper', None)
if server is not None and server.__module__ == 'django.core.servers.basehttp':
    print('inside dev')
Of course, wsgi.file_wrapper might be set in META and have a class from a module named django.core.servers.basehttp by extreme coincidence on another server environment, but I hope this will have you covered.
By the way, I discovered this by making a syntactically invalid template while running on the development server and searching for interesting stuff in the Traceback and Request information sections, so I'm just editing my answer to corroborate Nate's ideas.
Usually this works:
import sys
if 'runserver' in sys.argv:
    # you use runserver
Typically I set a variable called environment and set it to "DEVELOPMENT", "STAGING" or "PRODUCTION". Within the settings file I can then add basic logic to change which settings are being used, based on environment.
EDIT: Additionally, you can simply use this logic to include different settings.py files that override the base settings. For example:
if environment == "DEBUG":
    from debugsettings import *
Relying on settings.DEBUG is the most elegant way AFAICS, as it is also used in the Django code base on occasion.
I suppose what you really want is a way to set that flag automatically, without needing to update it manually every time you upload the project to production servers.
For that I check the path of settings.py (in settings.py) to determine what server the project is running on:
if __file__ == "path to settings.py in my development machine":
    DEBUG = True
elif __file__ in [paths of production servers]:
    DEBUG = False
else:
    raise WhereTheHellIsThisServedException()
Mind you, you might also prefer doing this check with environment variables as @Soviut suggests. But as someone developing on Windows and serving on Linux, checking the file paths was plain easier than going with environment variables.
I came across this problem just now and ended up writing a solution similar to Aryeh Leib Taurog's. The main difference is that I want to differentiate between production and dev environments not only when running the server, but also when running one-off scripts for my app (which I run like DJANGO_SETTINGS_MODULE=settings python [the script]). In that case, simply looking at whether argv[1] == 'runserver' isn't enough. So what I came up with is to pass an extra command-line argument when I run the dev server, and also when I run my scripts, and just look for that argument in settings.py. The code looks like this:
if '--in-development' in sys.argv:
    # YES! we're in dev
    pass
else:
    # Nope, this is prod
    pass
then, running the django server becomes
python manage.py runserver [whatever options you want] --in-development
and running my scripts is as easy as
DJANGO_SETTINGS_MODULE=settings python [myscript] --in-development
Just make sure the extra argument you pass along doesn't conflict with anything Django uses (in reality I use my app's name as part of the argument).
I think this is pretty decent, as it lets me control exactly when my server and scripts will behave as prod or dev, and I'm not relying on anyone else's conventions, other than my own.
EDIT: manage.py complains if you pass unrecognized options, so you need to change the code in settings.py to be something like
if sys.argv[0] == 'manage.py' or '--in-development' in sys.argv:
    # ...
    pass
Although this works, I recognize it's not the most elegant of solutions...
If you want to switch your settings files automatically depending on the runtime environment, you could just use something that differs in the environ, e.g.:
from os import environ
if environ.get('_', ''):
    print("This is dev - not Apache mod_wsgi")
You can determine whether you're running under WSGI (mod_wsgi, gunicorn, waitress, etc.) vs. manage.py (runserver, test, migrate, etc.) or anything else:
import sys
WSGI = 'django.core.wsgi' in sys.modules
settings.DEBUG could be True while running under Apache or some other non-development server, and it will still run. As far as I can tell, there is nothing in the run-time environment, short of examining the pid and comparing it to pids in the OS, that will give you this information.
I use:
import platform

DEV_SERVERS = [
    'mymachine.local',
]
DEVELOPMENT = platform.node() in DEV_SERVERS
which requires paying attention to what is returned by .node() on your machines. It's important that the default be non-development so that you don't accidentally expose sensitive development information.
You could also look into more complicated ways of uniquely identifying computers.
One difference between the development and deployment environments is going to be the server that it's running on. What exactly is different will depend on your dev and deployment environments.
Knowing your own dev and deploy environments, the HTTP request variables can be used to distinguish between the two. Look at request variables like request.META['HTTP_HOST'], request.META['SERVER_NAME'], and request.META['SERVER_PORT'], and compare them in the two environments.
I bet you’ll find something quite obvious that’s different and can be used to detect your development environment. Do the test in settings.py and set a variable that you can use elsewhere.
Inspired by Aryeh's answer, the trick I devised for my own use is to just look for the name of my management script in sys.argv[0]:
USING_DEV_SERVER = "pulpdist/manage_site.py" in sys.argv[0]
(My use case is to automatically enable Django native authentication when running the test server - when running under Apache, even on development servers, all authentication for my current project is handled via Kerberos)
You could check the request.META["SERVER_SOFTWARE"] value:
dev_servers = ["WSGIServer", "Werkzeug"]
if any(server in request.META["SERVER_SOFTWARE"] for server in dev_servers):
    print("is local")
Simple: you may check for a path that only exists on the server. Something like:
import os
SERVER = os.path.exists('/var/www/your_project')
I've created a web.py application, and now that it is ready to be deployed, I don't want to run it on web.py's built-in web server. I want to be able to run it on different web servers, Apache or IIS, without having to change my application code. This is where WSGI is supposed to come in, if I understand it correctly.
However, I don't understand what exactly I have to do to make my application deployable on a WSGI server. Most examples assume you are using Pylons/Django/another framework, where you simply run some magic command that fixes everything for you.
From what I understand of the (quite brief) web.py documentation, instead of running web.application(...).run(), I should use web.application(...).wsgifunc(). And then what?
Exactly what you need to do to host it with a specific WSGI hosting mechanism varies with the server.
For the case of Apache/mod_wsgi and Phusion Passenger, you just need to provide a WSGI script file which contains an object called 'application'. For web.py 0.2, this is the result of calling web.wsgifunc() with appropriate arguments. For web.py 0.3, you instead use the wsgifunc() member function of the object returned by web.application(). For details of these, see the mod_wsgi documentation:
http://code.google.com/p/modwsgi/wiki/IntegrationWithWebPy
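A rough sketch of what such a script file might look like for web.py 0.3 (the module name mysite and the path are placeholders for your own layout):

# app.wsgi - bridge script for mod_wsgi; the real app code lives elsewhere.
import sys
sys.path.insert(0, '/path/to/your/app')  # keep code outside the document root

from mysite import app  # a web.application instance defined in your package

# mod_wsgi looks for a module-level object named "application".
application = app.wsgifunc()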
If instead you are having to use FastCGI, SCGI or AJP adapters for a server such as Lighttpd, nginx or Cherokee, then you need to use the 'flup' package to provide a bridge between those language-agnostic interfaces and WSGI. This involves passing to a flup function the same WSGI application object that something like mod_wsgi or Phusion Passenger would use directly, without the need for a bridge. For details of this see:
http://trac.saddi.com/flup/wiki/FlupServers
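For example, a minimal FastCGI bridge using flup might look like this (a sketch; mysite is again a placeholder, and flup must be installed):

# fcgi_bridge.py - serve the same WSGI callable over FastCGI via flup.
from flup.server.fcgi import WSGIServer

from mysite import app  # the same web.application instance as above

WSGIServer(app.wsgifunc()).run()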
The important thing is to structure your web application so that it is in its own self-contained set of modules. To work with a particular server, create a separate script file as necessary to bridge between what that server requires and your application code. Your application code should always live outside of the web server's document directory; only the script file that acts as the bridge would be in the server's document directory, if appropriate.
As of July 21, 2009, there is a much fuller installation guide at the web.py install site, which discusses flup, FastCGI, Apache and more. I haven't yet tried it, but it seems much more detailed.
Here is an example of two hosted apps using the CherryPy WSGI server:
#!/usr/bin/python
import web
from web import wsgiserver

# web.py WSGI app
urls = (
    '/test.*', 'index'
)

class index:
    def GET(self):
        web.header("content-type", "text/html")
        return "Hello, world1!"

application = web.application(urls, globals(), autoreload=False).wsgifunc()

# generic WSGI app
def my_blog_app(environ, start_response):
    status = '200 OK'
    response_headers = [('Content-type', 'text/plain')]
    start_response(status, response_headers)
    return ['Hello world! - blog\n']

"""
# single hosted app
server = wsgiserver.CherryPyWSGIServer(
    ('0.0.0.0', 8070), application,
    server_name='www.cherrypy.example')
"""

# multiple hosted apps with WSGIPathInfoDispatcher
d = wsgiserver.WSGIPathInfoDispatcher({'/test': application, '/blog': my_blog_app})
server = wsgiserver.CherryPyWSGIServer(('0.0.0.0', 8070), d)
server.start()