Gunicorn: do not empty specified log files when restarting - python

I have created a Gunicorn project, with accesslog and errorlog being specified in a config file, and then the server being started with only a -c flag to specify this config file.
The problem is, each time I restart the same Gunicorn process (via pkill -F <pidfile, also specified in config>), the files specified in these configs are emptied. I was told that this is because Gunicorn opens these files in "write" mode rather than "append" mode, but I haven't found anything about it in the official settings documentation.
How can I fix it? This matters because I tend to forget to back up these logs manually, and I haven't had the capacity to automate that so far.

This was my mistake, and mostly unrelated to Gunicorn itself: I had a script that created any required file that didn't exist yet, since a missing file could have crashed the app server:
for file in [pidfile, accesslog, errorlog]:
    os.makedirs(os.path.dirname(file), exist_ok=True)
    f = open(file, "w")
File mode "w" truncates the file every time it is opened, which is what emptied the logs. Using "w" for the pidfile only, and "a" (append) for the log files, solved the problem.
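A minimal sketch of the corrected helper (the base directory here is a temporary stand-in for the real deployment paths): the pidfile can keep "w", but the log files get "a", which creates them if missing without ever truncating existing contents.

```python
import os
import tempfile

base = tempfile.mkdtemp()  # stand-in for the real deployment directory
pidfile = os.path.join(base, "run", "gunicorn.pid")
accesslog = os.path.join(base, "log", "access.log")
errorlog = os.path.join(base, "log", "error.log")

for path, mode in [(pidfile, "w"), (accesslog, "a"), (errorlog, "a")]:
    os.makedirs(os.path.dirname(path), exist_ok=True)
    # "a" creates the file if it is missing, but keeps existing contents
    with open(path, mode):
        pass
```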

Related

Why do I have no logs? empty web.stdout.logs?

So I have an AWS EB environment with an application deployed.
I can't view the application's log output (web.stdout.logs is empty).
You can try adding the following to your python.config, or create a new config file, e.g. mylogs.config:
files:
  "/opt/elasticbeanstalk/config/private/logtasks/bundle/applogs.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      /opt/python/log/*.log
The /opt/python/log/*.log path should be adjusted for your application.
The problem was not that I couldn't see the output; it was always in the /var/log/web.stdout.log file. However, when I was zipping the file to upload to the EB environment, I was zipping it using the file manager. This caused an issue on upload, as the Procfile ended up beside the application rather than inside of it.
The effect looked like there were no log files, but really the application was just never being handed to the Procfile.
The solution was:
cd path/to/application
zip -r my_application_code.zip .
Now when I upload the zip file to the EB console, the Procfile is generated correctly and the application's log files are found in web.stdout.log as normal.
Thanks for your help.

uwsgi works from console but not ini

I am trying to set up graphite with nginx. Because of this I need to run it using a configuration or ini file in /etc/uwsgi, but I am unable to get the application to start correctly.
Using the command
uwsgi --http :9090 --wsgi-file /opt/graphite/conf/graphite.py
Graphite starts and runs fine; I can navigate it and look at stats.
I proceeded to create an ini file, with the contents:
[uwsgi]
processes = 2
socket = localhost:8081
gid = nginx
uid = nginx
chdir = /opt/graphite/conf
uswsgi-file = graphite.py
Running with the ini file, I see:
mapped 145536 bytes (142 KB) for 2 cores
*** Operational MODE: preforking ***
*** no app loaded. going in full dynamic mode ***
*** uWSGI is running in multiple interpreter mode ***
I can only guess that something is misconfigured here in the ini file, but I am not seeing what it is.
Any help is appreciated!
There are some differences between your command line and your ini file:
You're using socket instead of http in the ini. That means the uWSGI server will talk the uwsgi protocol instead of HTTP. If you're using uwsgi_pass in nginx and accessing your website from the browser through that nginx, that's fine. But if you're trying to access uWSGI directly from the browser, bypassing nginx, you won't succeed, because browsers don't speak uwsgi.
You've put uswsgi-file instead of wsgi-file into your config. That won't work at all: uwsgi won't be able to find your wsgi file.
And if you're chdiring into directory with your wsgi file, it is better to use:
module = wsgi
instead of wsgi-file.
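Putting the two fixes together, a corrected ini might look like the following sketch (the module name graphite is assumed from graphite.py above; keep socket only if nginx proxies to uWSGI via uwsgi_pass):

```ini
[uwsgi]
processes = 2
socket = localhost:8081
gid = nginx
uid = nginx
chdir = /opt/graphite/conf
; with chdir pointing at the directory holding graphite.py,
; load it as a module rather than via wsgi-file
module = graphite
```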

nginx+uwsgi+django, there seems to be some strange cache in uwsgi, help me

This is uwsgi config:
[uwsgi]
uid = 500
listen=200
master = true
profiler = true
processes = 8
logdate = true
socket = 127.0.0.1:8000
module = www.wsgi
pythonpath = /root/www/
pythonpath = /root/www/www
pidfile = /root/www/www.pid
daemonize = /root/www/www.log
enable-threads = true
memory-report = true
limit-as = 6048
This is Nginx config:
server {
    listen 80;
    server_name 119.254.35.221;

    location / {
        uwsgi_pass 127.0.0.1:8000;
        include uwsgi_params;
    }
}
Django works OK, but modified pages can't be seen unless I restart uwsgi. (What's more, since I configured 8 worker processes, I can see the modified page if I press Ctrl+F5 for a while; it seems that only certain workers read and serve the modified page, while the others just show the old one. Who caches the old page? I didn't configure anything about caching.)
I didn't configure Django specially, and it works well with "python manage.py runserver ...", but I have this problem when working with nginx+uwsgi.
(The nginx and uwsgi are both fresh installations; I'm sure nothing else is configured here.)
uwsgi does not reload your code automatically; only the development server does.
runserver is for debug purposes; uwsgi and nginx are for production.
In production you can restart uwsgi with service uwsgi restart or via an init.d script.
There is an even better way to reload uwsgi, by using touch-reload.
Usually there is no need to clean up .pyc files; that is necessary only when the timestamps on the files are wrong (I've seen it only a couple of times in my entire career).
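As a sketch of the touch-reload approach mentioned above (the module name and path are illustrative): point touch-reload at any file, and uWSGI gracefully reloads its workers whenever that file's timestamp changes.

```ini
[uwsgi]
module = mysite.wsgi
; any writable file works; touch it after each deploy
touch-reload = /tmp/uwsgi-reload.txt
```

Then running touch /tmp/uwsgi-reload.txt after a deploy triggers a graceful reload without a full restart.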
This is normal behavior: uwsgi will not re-read your code unless you restart it (it does not work like runserver with DEBUG=True).
If, after you have updated your code, restarted uwsgi, and cleared your browser cache, it still doesn't reflect your changes, then you should delete the *.pyc files from your source directory.
I typically use this:
find . -name "*.pyc" -exec rm {} \;
Roughly speaking, a .pyc file is the "compiled" version of your code. Python will load this optimized version if it doesn't detect a change in the source. If you delete these files, Python will re-read your source files.
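Note that on Python 3, bytecode is cached in __pycache__ directories rather than as sibling .pyc files, so a fuller cleanup (equivalent in spirit to the find command above) would be:

```shell
# remove legacy-style .pyc files next to the sources
find . -name "*.pyc" -delete
# remove Python 3 bytecode cache directories
find . -name "__pycache__" -type d -prune -exec rm -rf {} +
```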

What determines the applicative log file location in Apache/Django?

(Cross posted to Server Fault some time ago)
I have a django app running on Apache/ubuntu, and I have evidently misconfigured it.
When I start apache, I'm getting this error in the apache log:
...
IOError: [Errno 2] No such file or directory: '/home/osqa/sites/log/django.osqa.log'
Now, my site is supposed to be running in '/home/osqa/sites/foobar/'. Why is django/apache looking for a log file in a folder above that one? Where is this configured? How can I resolve or analyze this?
The following lines in your httpd.conf file are what is causing your problem:
CustomLog ${APACHE_LOG_DIR}/beta-meta-d3c.access.log common
ErrorLog ${APACHE_LOG_DIR}/beta-meta-d3c.error.log
This logging is set up for apache as a whole, not just for your application, which lives in a subdirectory within apache.
${APACHE_LOG_DIR} should evaluate to /home/osqa/sites/log/. Fully expanded with your log names it will be:
/home/osqa/sites/log/beta-meta-d3c.access.log
Which is exactly what it is telling you. Either create that directory and make it writable, or change your httpd.conf to append the name of your application to the logging path directive. I'd probably not change the path, though, as other applications might also want to log there, just not to your application's directory.
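If you do want per-application logs, one alternative (a sketch; the ServerName and paths are illustrative, the directives are standard Apache) is to scope them inside the application's own VirtualHost:

```apache
<VirtualHost *:80>
    ServerName foobar.example.com
    DocumentRoot /home/osqa/sites/foobar
    ErrorLog /home/osqa/sites/foobar/log/error.log
    CustomLog /home/osqa/sites/foobar/log/access.log common
</VirtualHost>
```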
My hunch is that it's something in your apache2 config. Go to sites-available/your-site.com.conf and look in that file; perhaps you'll find that it configures logging.

mod_wsgi force reload modules

Is there a way to have mod_wsgi reload all modules (maybe in a particular directory) on each load?
While working on the code, it's very annoying to restart apache every time something is changed. The only option I've found so far is to put modname = reload(modname) below every import... but that's also really annoying, since it means I'll have to go through and remove them all at a later date.
The link:
http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode
should be emphasised. It should also be emphasised that on UNIX systems the daemon mode of mod_wsgi must be used, and you must implement the code monitor described in the documentation. The whole-process reloading option will not work for the embedded mode of mod_wsgi on UNIX systems. Even though on Windows systems the only option is embedded mode, it is possible, through a bit of trickery, to do the same thing by triggering an internal restart of Apache from the code-monitoring script. This is also described in the documentation.
The following solution is aimed at Linux users only, and has been tested to work under Ubuntu Server 12.04.1
To run WSGI under daemon mode, you need to specify WSGIProcessGroup and WSGIDaemonProcess directives in your Apache configuration file, for example
WSGIProcessGroup my_wsgi_process
WSGIDaemonProcess my_wsgi_process threads=15
More details are available in http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives
An added bonus is extra stability if you are running multiple WSGI sites under the same server, potentially with VirtualHost directives. Without daemon processes, I found two Django sites conflicting with each other and alternately returning 500 Internal Server Errors.
At this point, your server is in fact already monitoring your WSGI site for changes, though it only watches the file you specified using WSGIScriptAlias, like
WSGIScriptAlias / /var/www/my_django_site/my_django_site/wsgi.py
This means that you can force the WSGI daemon process to reload by changing the WSGI script. Of course, you don't have to change its contents, but rather,
$ touch /var/www/my_django_site/my_django_site/wsgi.py
would do the trick.
By utilizing the method above, you can automatically reload a WSGI site in production environment without restarting/reloading the entire Apache server, or modifying your WSGI script to do production-unsafe code change monitoring.
This is particularly useful when you have automated deploy scripts, and don't want to restart the Apache server on deployment.
During development, you may use a filesystem-changes watcher to invoke touch wsgi.py every time a module under your site changes, for example with pywatch.
The mod_wsgi documentation on code reloading is your best bet for an answer.
I know it's an old thread, but this might help someone. To kill your process when any file in a certain directory is written to, you can use something like this:
monitor.py
import os, signal, threading
import inotify.adapters

def _monitor(path):
    i = inotify.adapters.InotifyTree(path)
    print("monitoring", path)
    while True:
        for event in i.event_gen():
            if event is not None:
                (header, type_names, watch_path, filename) = event
                if 'IN_CLOSE_WRITE' in type_names:
                    prefix = 'monitor (pid=%d):' % os.getpid()
                    print("%s %s/%s changed, restarting!" % (prefix, path, filename))
                    os.kill(os.getpid(), signal.SIGKILL)

def start(path):
    t = threading.Thread(target=_monitor, args=(path,))
    t.daemon = True
    t.start()
    print('Started change monitor. (pid=%d)' % os.getpid())
In your server startup, call it like:
server.py
import monitor
monitor.start(<directory which contains your wsgi files>)
If your main server file is in the directory that contains all your files, you can call:
monitor.start(os.path.dirname(__file__))
Adding other folders is left as an exercise...
You'll need to pip install inotify.
This was cribbed from the code here: https://code.google.com/archive/p/modwsgi/wikis/ReloadingSourceCode.wiki#Restarting_Daemon_Processes
This is an answer to my duplicate question here: WSGI process reload modules
