Is there a way to have mod_wsgi reload all modules (maybe in a particular directory) on each load?
While working on the code, it's very annoying to restart Apache every time something changes. The only option I've found so far is to put modname = reload(modname) below every import, but that's also really annoying since it means I'll have to go through and remove them all at a later date.
The link:
http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode
should be emphasised. It should also be emphasised that on UNIX systems you must use mod_wsgi's daemon mode and implement the code monitor described in the documentation. The whole-process reloading option will not work for embedded mode of mod_wsgi on UNIX systems. Even though embedded mode is the only option on Windows systems, it is possible through a bit of trickery to achieve the same thing by triggering an internal restart of Apache from the code monitoring script. This is also described in the documentation.
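For reference, the monitor described there boils down to something like the following sketch. This is a simplified illustration of the idea, not the documented script verbatim: it polls the source files behind the loaded modules and signals the daemon process to restart when one of them changes.
import os
import signal
import sys
import threading
import time

_interval = 1.0
_times = {}

def _modified(path):
    # Record the mtime the first time we see a file, then report any change.
    try:
        mtime = os.stat(path).st_mtime
    except OSError:
        return False
    if path not in _times:
        _times[path] = mtime
    return mtime != _times[path]

def _monitor():
    while True:
        for module in list(sys.modules.values()):
            path = getattr(module, '__file__', None)
            if path and _modified(path):
                # In daemon mode this makes the daemon process shut down;
                # Apache then starts a fresh one, reloading all code.
                os.kill(os.getpid(), signal.SIGINT)
        time.sleep(_interval)

def start():
    thread = threading.Thread(target=_monitor)
    thread.daemon = True
    thread.start()
Call start() once from your WSGI script when running in daemon mode.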
The following solution is aimed at Linux users only, and has been tested to work under Ubuntu Server 12.04.1
To run WSGI under daemon mode, you need to specify WSGIProcessGroup and WSGIDaemonProcess directives in your Apache configuration file, for example
WSGIProcessGroup my_wsgi_process
WSGIDaemonProcess my_wsgi_process threads=15
More details are available in http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives
An added bonus is extra stability if you are running multiple WSGI sites under the same server, potentially with VirtualHost directives. Without daemon processes, I found two Django sites conflicting with each other and alternately returning 500 Internal Server Errors.
At this point, your server is in fact already monitoring your WSGI site for changes, though it only watches the file you specified using WSGIScriptAlias, like
WSGIScriptAlias / /var/www/my_django_site/my_django_site/wsgi.py
This means that you can force the WSGI daemon process to reload by changing the WSGI script. Of course, you don't have to change its contents; simply updating its modification time with
$ touch /var/www/my_django_site/my_django_site/wsgi.py
would do the trick.
By using the method above, you can automatically reload a WSGI site in a production environment without restarting/reloading the entire Apache server, or modifying your WSGI script to do production-unsafe code-change monitoring.
This is particularly useful when you have automated deploy scripts, and don't want to restart the Apache server on deployment.
During development, you can use a filesystem change watcher to run touch wsgi.py every time a module under your site changes, for example pywatch.
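Alternatively, here is a minimal sketch of such a watcher using the third-party watchdog package (pip install watchdog); the project and script paths below are assumptions matching the example above:
import os
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

PROJECT_DIR = '/var/www/my_django_site'                         # assumed path
WSGI_SCRIPT = '/var/www/my_django_site/my_django_site/wsgi.py'  # assumed path

class TouchWsgiScript(FileSystemEventHandler):
    def on_modified(self, event):
        # When any .py file other than the WSGI script itself changes,
        # "touch" the WSGI script so the daemon process reloads.
        if event.src_path.endswith('.py') and event.src_path != WSGI_SCRIPT:
            os.utime(WSGI_SCRIPT, None)

if __name__ == '__main__':
    observer = Observer()
    observer.schedule(TouchWsgiScript(), PROJECT_DIR, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()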
The mod_wsgi documentation on code reloading is your best bet for an answer.
I know it's an old thread but this might help someone. To kill your process when any file in a certain directory is written to, you can use something like this:
monitor.py
import os, signal, threading

import inotify.adapters


def _monitor(path):
    # Watch the whole directory tree rooted at "path" for writes.
    i = inotify.adapters.InotifyTree(path)
    print('monitoring %s' % path)
    while True:
        for event in i.event_gen():
            if event is not None:
                (header, type_names, watch_path, filename) = event
                if 'IN_CLOSE_WRITE' in type_names:
                    prefix = 'monitor (pid=%d):' % os.getpid()
                    print('%s %s/%s changed, restarting!' % (prefix, path, filename))
                    # Kill this process; mod_wsgi (or your process manager)
                    # starts a fresh one that picks up the new code.
                    os.kill(os.getpid(), signal.SIGKILL)


def start(path):
    t = threading.Thread(target=_monitor, args=(path,))
    t.daemon = True
    t.start()
    print('Started change monitor. (pid=%d)' % os.getpid())
In your server startup, call it like:
server.py
import os
import monitor

monitor.start(<directory which contains your wsgi files>)
If your main server file is in the directory that contains all your files, you can simply do:
monitor.start(os.path.dirname(__file__))
Adding other folders is left as an exercise...
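For instance, a rough sketch that watches a couple of extra trees (the extra paths are placeholders, not real ones):
import os

import monitor

# One monitor thread per tree; InotifyTree already watches each tree recursively.
for path in [os.path.dirname(__file__),
             '/srv/myapp/templates',   # placeholder path
             '/srv/myapp/lib']:        # placeholder path
    monitor.start(path)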
You'll need to 'pip install inotify'
This was cribbed from the code here: https://code.google.com/archive/p/modwsgi/wikis/ReloadingSourceCode.wiki#Restarting_Daemon_Processes
This is an answer to my duplicate question here: WSGI process reload modules
I have created a Gunicorn project, with accesslog and errorlog specified in a config file, and the server started with only a -c flag pointing to that config file.
The problem is that each time I restart the same Gunicorn process (via pkill -F <pidfile, also specified in config>), the files specified in the config are emptied. I was told this happens because Gunicorn opens these files in "write" mode rather than "append" mode, but I haven't found anything about it in the official settings.
How can I fix it? It's important because I tend to forget to back up these logs manually and haven't had the capacity to automate it so far.
This was my mistake, and mostly unrelated to Gunicorn itself: I had a script that created any required file that didn't exist yet, since a missing file could have crashed the app server:
import os

for file in [pidfile, accesslog, errorlog]:
    os.makedirs(os.path.dirname(file), exist_ok=True)
    f = open(file, "w")
File mode w always emptied the files. Changing the script to use w for the pidfile only and a (append) for the logfiles solved the problem.
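A minimal sketch of the corrected snippet (the paths here are placeholders, not the real ones from the config):
import os

# Placeholder paths; substitute the values from your Gunicorn config file.
pidfile = "/var/run/gunicorn/app.pid"
accesslog = "/var/log/gunicorn/access.log"
errorlog = "/var/log/gunicorn/error.log"

# "w" truncates, so reserve it for the pidfile; "a" leaves existing logs intact.
for path, mode in [(pidfile, "w"), (accesslog, "a"), (errorlog, "a")]:
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, mode):
        pass  # ensure the file exists without wiping previous log content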
I have a Flask application running with uWSGI, nginx, and Supervisor.
No matter what I try, I can't seem to get the code changes to take effect on the server.
If I run the app locally the changes are there.
If I stop and start uwsgi, the changes take effect.
If I restart the supervisor service the changes don't take effect.
I know the code has the changes because I log in and see the changes I made, but it's still running the old code at specific routes.
If I change the title of my page, those changes take effect right away, but my webhook endpoints never seem to change.
Here are my config files.
app.ini
[uwsgi]
module = wsgi
master = true
processes = 5
socket = app.sock
chmod-socket = 660
vacuum = true
die-on-term = true
supervisor
[program:app.io]
command=/home/www/beta/v_env/bin/uwsgi --ini /home/www/beta/app.ini --chown-socket www-data:www-data
directory=/home/www/beta
autostart=true
autorestart=true
stdout_logfile=/home/logs/app_uwsgi.log
redirect=true
stopsignal=QUIT
nginx
server {
    listen 80;
    server_name beta.domain.io;
    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/www/beta/app.sock;
        uwsgi_read_timeout 1800;
    }
}
When testing it I also get this error a lot
upstream prematurely closed connection while reading response header from upstream
If your changes to the code aren't reflected in your program's behavior, then either your code hasn't been reloaded, it hasn't actually changed, or you misunderstand what your changes do. The most common time this happens to me is when .pyc files fail to get refreshed. You might try clearing all the .pyc files from your project and then launching it again, assuming you have any. This also has the benefit of restarting the application.
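If you do want to clear the compiled bytecode, a small sketch (the project path is taken from the configs in the question and is only an assumption):
import pathlib

# Assumed project root, based on the paths in the question's configs.
project = pathlib.Path('/home/www/beta')

# Remove stale bytecode so Python recompiles from the current sources.
for pyc in project.rglob('*.pyc'):
    pyc.unlink()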
Before doing anything else, I recommend using Supervisor to restart your application. If your application is managed by Supervisor, you can use the following commands:
supervisorctl stop all
supervisorctl start all
supervisorctl restart all
Edit:
I forgot to mention that if you are viewing this in a browser then it is conceivable the browser is caching the older version. If this is the case, you would need to do a hard refresh.
I have Django and Celery set up. I am only using one node for the worker.
I want to use it as an asynchronous queue and as a scheduler.
I can launch the worker as follows, with the -B option, and it will do both.
celery worker start 127.0.0.1 --app=myapp.tasks -B
However, it is unclear how to do this in production, where I want to daemonise the process. Do I need to set up both init scripts?
I have tried adding the -B option to the init.d script, but it doesn't seem to have any effect. The documentation is not very clear.
Personally I use Supervisord, which has some nice options and configurability. There are example supervisord config files here
A couple of ways to achieve this:
http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html
1. The Celery distribution comes with a generic init script located at path-to-celery/celery-3.1.10/extra/generic-init.d/celeryd.
This can be placed in /etc/init.d/celeryd-name and configured using a configuration file, also present in the distribution, which looks like the following:
# Names of nodes to start (space-separated)
#CELERYD_NODES="my_application-node_1"
# Where to chdir at start. This could be the root of a virtualenv.
#CELERYD_CHDIR="/path/to/my_application"
# How to call celeryd-multi
#CELERYD_MULTI="$CELERYD_CHDIR/bin/celeryd-multi"
# Extra arguments
#CELERYD_OPTS="--app=my_application.path.to.worker --time-limit=300 --concurrency=8 --loglevel=DEBUG"
# Create log/pid dirs, if they don't already exist
#CELERY_CREATE_DIRS=1
# %n will be replaced with the nodename
#CELERYD_LOG_FILE="/path/to/my_application/log/%n.log"
#CELERYD_PID_FILE="/var/run/celery/%n.pid"
# Workers run as an unprivileged user
#CELERYD_USER=my_user
#CELERYD_GROUP=my_group
You can add the following celerybeat elements to the same file for celery beat configuration:
# Where to chdir at start.
CELERYBEAT_CHDIR="/opt/Myproject/"
# Extra arguments to celerybeat
CELERYBEAT_OPTS="--schedule=/var/run/celery/celerybeat-schedule"
This config should then be saved (at least for CentOS) as /etc/default/celeryd-config-name.
Look at the init file for the exact location.
Now you can run celery as a daemon with:
/etc/init.d/celeryd start/restart/stop
2. Using supervisord.
As mentioned in the other answer.
The supervisord configuration files are also in the distribution at path-to-dist/celery-version/extra/supervisord.
Configure using those files and use supervisorctl to run the service as a daemon.
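For illustration, a supervisord program entry for a single worker node that also runs the beat scheduler might look roughly like this; the paths and names are placeholders, not taken from the distribution's example files:
[program:celery]
; Placeholder paths; adjust to your virtualenv and project.
command=/path/to/virtualenv/bin/celery worker --app=my_application.path.to.worker -B --loglevel=INFO
directory=/path/to/my_application
user=my_user
autostart=true
autorestart=true
stdout_logfile=/var/log/celery/worker.log
redirect_stderr=true
stopwaitsecs=600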
I am running a Python script on Apache 2.2 with mod_wsgi.
Is it possible to run pdb.set_trace() in a Python script using mod_wsgi's daemon mode?
Edit
The reason I want to use daemon mode instead of embedded mode is the ability to reload code without restarting the Apache server every time (which embedded mode requires). I would like to keep that code reloading and still be able to use pdb...
I had the same need to be able to use the amazingly powerful pdb, dropping a pdb.set_trace() wherever I wanted to debug some part of the Python server code.
Yes, Apache spawns the WSGI application in a place where it is out of your control [1]. But I found a good compromise is to
maintain your Apache WSGIScriptAlias
and also give yourself the option of starting your Python server in a terminal as well (testing locally and not through Apache anymore in this case)
So if one uses WSGIScriptAlias somewhat like this, pointing to your Python WSGI script called webserver.py:
<VirtualHost *:443>
    ServerName myawesomeserver
    DocumentRoot /opt/local/apache2/htdocs
    <Directory /opt/local/apache2/htdocs>
        [...]
    </Directory>
    WSGIScriptAlias /myapp /opt/local/apache2/my_wsgi_scripts/webserver.py/
    <Directory /opt/local/apache2/my_wsgi_scripts/>
        [...]
    </Directory>
    [...]
    SSLEngine on
    [...]
</VirtualHost>
And so your webserver.py can have a simple switch to go between being used by Apache and getting started up for debugging manually.
Keep a flag in your config file such as, in some settings.py:
WEBPY_WSGI_IS_ON = True
And webserver.py :
import pdb

import web

import settings

urls = (
    '/', 'excellentWebClass',
    '/store', 'evenClassier',
)

if settings.WEBPY_WSGI_IS_ON is True:
    # MODE #1: Non-interactive web.py, using WSGI.
    # Whenever this is true, the web.py application here will talk WSGI.
    application = web.application(urls, globals()).wsgifunc()


class excellentWebClass:
    def GET(self, name):
        # Drop a pdb wherever you want, but only if running manually from a terminal.
        pdb.set_trace()
        try:
            f = open(name)
            return f.read()
        except IOError:
            return 'Error: No such file %s' % name


if __name__ == "__main__":
    # MODE #2: Interactive web.py, for debugging.
    # Here you call it directly.
    app = web.application(urls, globals())
    app.run()
So when you want to test out your webserver interactively, you just run it from a terminal,
$ python webserver.py 8080
starting web...
http://0.0.0.0:8080/
[1] Footnote: There are some really complex ways of getting Apache child processes under your control, but I think the above is much simpler if you just want to debug your Python server code. And if there are actually easy ways, then I would love to learn about those too.
I'm trying to deploy some Pyramid code to dotcloud. Unfortunately, some paths are not mapped in the same way as in a local paster deployment. When I run the development configuration with the local server through paster serve ..., I can access static files configured with:
config.add_static_view('static', 'appname:static')
However, on the dotcloud servers, when the script runs via the following wsgi.py:
import os, sys
from paste.deploy import loadapp
current_dir = os.path.dirname(__file__)
application = loadapp('config:production.ini', relative_to=current_dir)
the static content is searched for in the wrong directory. Instead of /home/dotcloud/current/static/pylons.css, it should look in /home/dotcloud/current/appname/static/pylons.css.
Is there some part of wsgi configuration which can define the base directory? What am I missing? The application is run via nginx / uwsgi.
I tried to load config:../production.ini, relative_to=current_dir + '/appname' but that didn't change anything.
On DotCloud, URLs starting with /static are handled directly by nginx, not by uwsgi. That means that your code will never see those requests: they will be served straight away from the static/ subdirectory of your application.
One possible workaround is to set up a symlink from static to appname/static.
If you don't want to clutter your repository with such a symlink, you can use a postinstall script instead:
#!/bin/sh
# This creates the symlink required by DotCloud to serve static content from nginx
ln -s ~/current/appname/static ~/current/static
The symlink alone is sleek, but the postinstall script gives you the opportunity to drop a comment in the file to explain its purpose :-)
Future releases of DotCloud might offer a "naked configuration" toggle, where the nginx configuration won't include any special path handling, in case you don't want it.
Meanwhile, if you want to see the nginx default configuration of your DotCloud service, you can just dotcloud ssh to your service, and inspect /etc/nginx/sites-enabled/default.