I am running Django in a mod_wsgi environment on a shared host. I want to restrict the resources a request can use and ideally raise an Exception if it exceeds that amount. The WSGI options are as follows:
WSGIRestrictSignal off
WSGIRestrictStdout off
The VirtualHost has the following:
WSGIDaemonProcess django processes=10 threads=1 display-name=django-web
WSGIProcessGroup django
In the request I do the following:
import resource
import signal

resource.setrlimit(resource.RLIMIT_CPU, (10, resource.RLIM_INFINITY))  # 10 s soft CPU limit, no hard limit
signal.signal(signal.SIGXCPU, cpu_signal)      # called when the soft CPU limit is exceeded
signal.signal(signal.SIGALRM, timeout_signal)  # called when the wall-clock alarm fires
signal.alarm(1)   # deliver SIGALRM after 1 second of wall-clock time
# Do some stuff
signal.alarm(0)   # cancel the alarm
However, when I run the request I get the error "signal only works in main thread", even though when I print out the number of active threads and the current thread name, there is one thread and its name is MainThread. So I don't understand why, when Python tries to set the signal handler, it doesn't believe it is running in the main thread.
I am running Python 2.7.2, Django 1.3.1, Apache 2.2.21 and mod_wsgi 3.3.
Send an email to the mod_wsgi mailing list and we will discuss there what is possible. The mod_wsgi 4.0 development version has some experimental mechanisms for killing the process when CPU usage exceeds some value, but they are process-wide. Doing it on a per-request basis is hard because multithreading may be in use. Stack Overflow is not the place to hold a discussion about it.
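For reference, a process-wide limit (which is what those experimental mechanisms target) can be approximated with resource.setrlimit alone, since setting an rlimit, unlike installing a signal handler, does not have to happen in the main thread. A minimal sketch with illustrative 60/90 second limits, placed near the top of the WSGI script:

import resource

# Cap total CPU time for this daemon process: at the 60 s soft limit the
# process receives SIGXCPU (fatal by default); at the 90 s hard limit the
# kernel sends SIGKILL. Apache/mod_wsgi then starts a replacement process.
resource.setrlimit(resource.RLIMIT_CPU, (60, 90))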
Related
I need to execute some slow tasks upon receiving a POST request.
My server runs under uWSGI, which behaves in a weird manner.
Localhost (python manage.py runserver):
When receiving a request from the browser, I do p = Process(target=workload); p.start(); return redirect(...). The browser immediately follows the redirect, and the worker process starts in the background.
uWSGI (2 workers):
The background process starts, but the browser doesn't get redirected; it waits until the child exits.
Note: I have added the close-on-exec=true parameter to the uWSGI configuration (as advised in the documentation and in "Running a subprocess in uwsgi application"), but it has no visible effect; the application still waits for the child to exit.
I imagine Python gets confused since the interpreter multiprocessing.Process() defaults to is the uwsgi binary, not the regular Python interpreter.
Additionally, you may be using the fork start method (depending on the OS), and forking the uWSGI worker isn't a great idea.
You will likely need to call multiprocessing.set_executable() and multiprocessing.set_start_method() when you're running under uWSGI, with something like this in e.g. your Django settings.py:
import multiprocessing
import os
import sys

try:
    import uwsgi  # magic module, only importable when running under uWSGI
except ImportError:
    uwsgi = None

if uwsgi:
    # Spawn a fresh interpreter for child processes instead of forking the uWSGI worker.
    multiprocessing.set_start_method('spawn')
    # Point multiprocessing at the real Python binary; adjust the path if your
    # interpreter lives elsewhere (e.g. inside a virtualenv).
    multiprocessing.set_executable(os.path.join(sys.exec_prefix, 'bin', 'python'))
However, you may want to look into using e.g. uWSGI's spooler system or some other work queue/background task system such as Huey, rq, Minique, Celery, etc.
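As a rough illustration of the task-queue route (assuming a Celery app and broker are already configured for the project; workload, do_slow_thing and the URL are made-up names):

from celery import shared_task
from django.shortcuts import redirect

@shared_task
def workload(item_id):
    # The heavy work runs in a Celery worker process, outside the uWSGI
    # workers, so the HTTP response is never blocked by it.
    do_slow_thing(item_id)  # hypothetical slow function

def my_view(request):
    workload.delay(request.POST['id'])  # enqueue and return immediately
    return redirect('/done/')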
I'm stuck trying to debug an Apache process that keeps growing in memory size. I'm running Apache 2.4.6 with MPM Prefork on a virtual Ubuntu host with 4GB of RAM, serving a Django app with mod_wsgi. The app is heavy with AJAX calls and Apache is getting between 300 and 1000 requests per minute. Here's what I'm seeing:
As soon as I restart Apache, the first child process (with the lowest PID) will keep growing its memory usage, reaching over a gigabyte in 6 or 7 minutes. All the other Apache processes keep memory usage between 10 MB and 50 MB per process.
CPU usage for the troublesome process will fluctuate, sometimes dipping down very low, other times hovering at 20% or sometimes spiking higher.
The troublesome process will run indefinitely until I restart Apache.
I can see in my Django logs that the troublesome process is serving some requests to multiple remote IPs (I'm seeing reports of caught exceptions for URLs my app doesn't like, primarily).
Apache error logs will often (but not always) show "IOError: failed to write data" for the PID, sometimes across multiple IPs.
Apache access logs do not show any completed requests associated with this PID.
Running strace on the PID gets no results other than 'restart_syscall(<... resuming interrupted call ...>' even when I can see that PID mentioned in my app logs at a time when strace was running.
I've tried setting low values of MaxRequestsPerChild and MaxMemFree and neither has seemed to have any effect.
What could this be, or how could I debug further? The fact that I see no strace output makes me suspect that my application has an infinite loop. If that were the case, how could I go about tracing the PID back to the code path it executed or the request that started the trouble?
Instead of restarting Apache, stop and then start it. There is a known, never-fixed memory leak issue with Apache.
Also, consider using nginx and gunicorn; this setup is a lightweight, faster, and often recommended alternative for serving your Django app and static files.
References:
Performance
Memory Usage
Apache/Nginx Comparison
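One thing not covered above is the last part of the question, tracing the stuck PID back to a code path. A rough sketch (not specific to this setup) that dumps the Python stack of every thread in the process; it can be wired into a debug-only view or written to a log from a watchdog thread:

import sys
import traceback

def dump_threads():
    # Collect the current Python stack of every thread in this process;
    # for a request stuck in a loop this shows the view and line it is on.
    out = []
    for thread_id, frame in sys._current_frames().items():
        out.append("Thread %s:\n" % thread_id)
        out.extend(traceback.format_stack(frame))
        out.append("\n")
    return "".join(out)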
In the Apache server, can we have process-level stickiness without having to use daemon mode?
With the mod_balancer module we can have stickiness at the server level, but I want all my requests to go to exactly the same process on that server. Is this possible, or what could be the alternative?
You can control this in Apache mod_wsgi with a few parameters. It can run a single process for your application's interpreter, and you can choose how many threads per process. You can also give a single virtual host exactly the resources it needs.
WSGIDaemonProcess
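For example (illustrative name and numbers), a single-process daemon group means every request for that application is handled by the same process:
WSGIDaemonProcess myapp processes=1 threads=15
WSGIProcessGroup myapp
Note that this relies on daemon mode; in embedded mode the Apache MPM decides which child process handles each request, so there is no per-process stickiness.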
I'm hoping someone's seen this -
I'm running django-compressor, taking advantage of the lessc setup to render/compress the LESS into CSS on the fly. It works perfectly when invoked from the development server, but when run under apache+mod_wsgi it consistently returns an error.
To debug this, I have run the exact command that the filter is invoking as the www-data user (which is defined as the wsgi user in the WSGIDaemonProcess directive) and verified that it works correctly, including permissions to read and write the files that it's manipulating.
I have also hacked on the django-compressor code in compressor/filters/base.py on that system, and it seems that ANY command it attempts to invoke gets a returncode of -6 after the proc.communicate() call.
I'm hoping someone's seen this before - or that it rings some bell. It works fine on this machine outside of the apache+mod_wsgi process (i.e. running the process as a dev server) as well. I'm just not clear on what might be blocking the subprocess.Popen() invocations.
Are you using Python 2.7.2 by chance?
That version of Python introduced a bug which causes fork() in sub-interpreters to fail:
http://bugs.python.org/issue13156
You will have to force the WSGI application to run in the main Python interpreter of the process by setting:
WSGIApplicationGroup %{GLOBAL}
If you are running multiple Django applications, you need to ensure that only the affected one has this configuration directive applied to it; otherwise you would cause all the Django applications to run in one interpreter, which isn't possible due to how Django configuration works.
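For example, placing WSGIApplicationGroup %{GLOBAL} only inside the virtual host (or the relevant Location/Directory section) for the affected application leaves the other Django applications in their own sub-interpreters.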
I have a variable in the __init__ of a module which gets loaded from the database and takes about 15 seconds.
With the Django development server everything works fine, but it looks like with apache2 and mod_wsgi the module is loaded on every request (taking 15 seconds).
Any idea about this behavior?
Update: I have enabled daemon mode in mod_wsgi, and it looks like it's not reloading the modules now! It needs more testing and I will update.
You were likely overlooking the fact that in mod_wsgi's embedded mode, or with mod_python, the application is multiprocess. Thus requests may go to different processes, and you will see the delay the first time a request hits a process which hasn't been hit before. In mod_wsgi daemon mode the default is a single process. That, or, as someone else mentioned, you had MaxRequestsPerChild set to 1, which is a really bad idea.
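For illustration (name and numbers are placeholders), a daemon-mode group with one persistent process looks like this, and the module-level load then happens once per process rather than on every request:
WSGIDaemonProcess mysite processes=1 threads=15
WSGIProcessGroup mysite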
I guess you had a value of 1 for MaxClients / MaxRequestsPerChild and/or ThreadsPerChild in your Apache settings, so Apache had to start up Django for every mod_python call. That's why it took so long. If you have a WSGI daemon, then a restart only takes place if you "touch" the WSGI script.