I have a Python Flask app that I have configured to run via Supervisord. The supervisor.conf file looks something like this -
[inet_http_server]
port=127.0.0.1:9001
[supervisord]
logfile=/path/to/log/supervisord.log
logfile_maxbytes=0 ; max main logfile bytes b4 rotation; default 50MB
logfile_backups=0 ; # of main logfile backups; 0 means none, default 10
loglevel=debug ; log level; default info; others: debug,warn,trace
pidfile=/path/to/supervisord.pid
nodaemon=false ; start in foreground if true; default false
directory=/path/to/project
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
; The supervisorctl section configures how supervisorctl will connect to
; supervisord. Configure it to match the settings in either the unix_http_server
; or inet_http_server section.
[supervisorctl]
serverurl=http://127.0.0.1:9001
history_file=/path/to/.sc_history ; use readline history if available
[program:my_server]
command=<command to run the program>
directory=/path/to/project
stdout_logfile=/path/to/log/%(program_name)s_stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=0 ;
stderr_logfile=/path/to/log/%(program_name)s_stderr.log ; stderr log path, NONE for none; default AUTO
stderr_logfile_backups=0 ; # of stderr logfile backups (0 means none, default 10)
The issue is that when I run the app via supervisord, it logs all output - info, debug, error, etc. - to the %(program_name)s_stderr.log file and not to the %(program_name)s_stdout.log file.
I log my info messages using Python's default logging library as -
logger.info("Some info msg")
What could be the reason for this behaviour?
While this question is tagged with flask and supervisord, the core issue is really how Python's logging system works. By default, logger.info() messages are sent to stderr, not stdout, so Flask and Supervisor are doing what they're told (really, Flask hardly enters into it).
The python/logger portion has a good answer here: Logging, StreamHandler and standard streams
In short, you have to create a separate StreamHandler for stderr and stdout, and tell them what messages (INFO, DEBUG, ERROR, etc.) go to which ones.
There's working code in that accepted answer, so I won't repeat it here.
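For completeness, here is a minimal sketch of that idea (not the code from that answer; the logger name and the INFO/WARNING split point are illustrative and should be adapted to your own setup):
import logging
import sys

class MaxLevelFilter(logging.Filter):
    """Let through only records below a given level."""
    def __init__(self, max_level):
        logging.Filter.__init__(self)
        self.max_level = max_level

    def filter(self, record):
        return record.levelno < self.max_level

logger = logging.getLogger("my_server")
logger.setLevel(logging.DEBUG)

# DEBUG and INFO go to stdout -> %(program_name)s_stdout.log under supervisord
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setLevel(logging.DEBUG)
stdout_handler.addFilter(MaxLevelFilter(logging.WARNING))
logger.addHandler(stdout_handler)

# WARNING and above go to stderr -> %(program_name)s_stderr.log
stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.WARNING)
logger.addHandler(stderr_handler)

logger.info("Some info msg")      # ends up in the stdout log
logger.error("Something broke")   # ends up in the stderr log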
Related
OS: macOS Sierra 10.12.3
I installed supervisor via 'brew install supervisor' and tried to use it to manage my Python program:
import get_weibo
import time

while True:
    get_weibo.get_all()
    time.sleep(60 * 60 * 6)
get_weibo works in the console, so it's not the problem.
and the configuration of supervisor is here
[supervisord]
logfile=/usr/local/var/log/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/usr/local/var/run/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
umask=022 ; (process file creation umask;default 022)
user=root ; (default is current user, required if root)
identifier=supervisor ; (supervisord identifier, default is 'supervisor')
directory=/tmp ; (default is not to cd during start)
nocleanup=true ; (don't clean up tempfiles at start;default false)
childlogdir=/tmp ; ('AUTO' child log dir, default $TEMP)
environment=KEY="value" ; (key value pairs to add to environment)
strip_ansi=false ; (strip ansi escape codes in logs; def. false)
[program:weibopics]
command=python /Users/HirosueRyouko/PycharmProjects/Supervisor_Py/keep_weibo_pics.py ; the program (relative uses PATH, can take args)
process_name=%(program_name)s ; process_name expr (default %(program_name)s)
numprocs=1 ; number of process copies to start (def 1)
umask=022 ; umask for process (default None)
priority=600 ; the relative start priority (default 999)
autostart=true ; start at supervisord start (default: true)
startsecs=1 ; # of secs prog must stay up to be running (def. 1)
startretries=3 ; max # of serial start failures when starting (default 3)
autorestart=true ; when to restart if exited after running (def: unexpected)
exitcodes=2 ; 'expected' exit codes used with autorestart (default 0,2)
stopsignal=QUIT ; signal used to kill process (default TERM)
stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)
stopasgroup=false ; send stop signal to the UNIX process group (default false)
killasgroup=false ; SIGKILL the UNIX process group (def false)
user=nobody ; setuid to this UNIX account to run the program
redirect_stderr=true ; redirect proc stderr to stdout (default false)
stdout_logfile=/Users/HirosueRyouko/PycharmProjects/Supervisor_Py/weibo_pics_output.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=10 ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false ; emit events on stdout writes (default false)
stderr_logfile=/Users/HirosueRyouko/PycharmProjects/Supervisor_Py/weibo_pics_error.log ; stderr log path, NONE for none; default AUTO
stderr_logfile_maxbytes=1MB ; max # logfile bytes b4 rotation (default 50MB)
stderr_logfile_backups=10 ; # of stderr logfile backups (default 10)
stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)
stderr_events_enabled=false ; emit events on stderr writes (default false)
environment=A="1",B="2" ; process environment additions (def no adds)
serverurl=AUTO ; override serverurl computation (childutils)
Here is the input on the command line:
guangmoliangzideMacBook:~ HirosueRyouko$ supervisord -c /etc/supervisord.conf
Then I got this in Supervisor Status:
(screenshot of the Supervisor Status page)
The tail -f output:
supervisor: couldn't setuid to 4294967294: Can't drop privilege as nonroot user
supervisor: child process was not spawned
How can I work this out?
You could try to use sudo on your command... read the error: "Can't drop privilege as nonroot user".
Or read here about running Supervisor without root.
It says to modify how the user= variable is set - for example, by commenting it out entirely.
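A minimal sketch of that change, assuming the offending line is the user=nobody entry in the [program:weibopics] section above (everything else stays as it is):
[program:weibopics]
;user=nobody ; commented out so supervisord does not try to setuid when it was not started as root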
I am using supervisor to run celery with a Django 1.8.8 setup, with django-supervisor==0.3.4 and supervisor==3.2.0.
But when I restart all processes, I get
unix:///tmp/supervisor.sock refused connection
and I am not able to restart any processes:
python manage.py supervisor --config-file=setting/staging_supervisor.conf --settings=setting.staging_settings restart all
supervisor config file
[supervisord]
logfile_maxbytes=10MB ; maximum size of logfile before rotation
logfile_backups=3 ; number of backed up logfiles
loglevel=warn ; info, debug, warn, trace
nodaemon=false ; run supervisord as a daemon
minfds=1024 ; number of startup file descriptors
minprocs=200 ; number of process descriptors
childlogdir=/logs/ ; where child log files will live
[program:celeryd_staging]
environment=PATH="{{ PROJECT_DIR }}/../../bin"
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celeryd -l info -c 1 --logfile=/logs/staging-celeryd.log --settings=setting.staging_celery_settings
redirect_stderr=false
[program:celerybeat_staging]
environment=PATH="{{ PROJECT_DIR }}/../../bin"
command=/{{ PYTHON }} {{ PROJECT_DIR }}/manage.py celerybeat --loglevel=INFO --logfile=/logs/staging-celerybeat.log --settings=setting.staging_celery_settings
redirect_stderr=false
[group:tasks]
environment=PATH="{{ PROJECT_DIR}}/../../bin"
programs=celeryd_staging,celerybeat_staging
[program:autoreload]
exclude=true
[program:runserver]
exclude=true
Got the solution. The supervisord process was not reloaded, as supervisord was running inside my virtualenv since I am using the django-supervisor package.
Once I reloaded the supervisord process, the refused-connection error went away.
Make sure there isn't already another /tmp/supervisor.sock owned by some user other than you (like root or something).
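For example, a quick way to check the socket's owner (the path is the one from the error above):
ls -l /tmp/supervisor.sock
# if it is owned by root or another user, remove it with that user's privileges,
# or set chown=/chmod= appropriately in the [unix_http_server] section below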
If it is not a permissions problem, add this to your supervisord configuration:
[unix_http_server]
file = /tmp/supervisor.sock ;
chmod=0700 ;
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
this might be helpful to you as well: https://github.com/Supervisor/supervisor/issues/480#issuecomment-145193475
My environment is Debian Jessie on an embedded ARM display.
I am using a bash script to automatically launch my app using the init.d method. It launches a second Python script as a daemon that handles my application on startup and reboot.
Because it is run this way, to the best of my knowledge it is now a daemonized background process, and STDOUT and STDIN are disconnected from my Python script.
The system and application serve a single purpose, so spamming the console with output from a background process is not only not a problem, it is desired. With the output I can easily ssh or serial-console into the display and see all the live debug output or exceptions.
I have looked into possible ways to force the process to the foreground, or to redirect the output to STDOUT, but have not found any definite answer for when the script is run at startup.
My logging to a file is working perfectly, and otherwise the app works well in its current state. Currently, when I need to debug, I stop the application and run it manually to get all the output.
I have considered using sockets to redirect the output from the app, and then running a separate listening script that prints it to the console... but that seems less than ideal, and a better solution might exist.
Are there methods to achieve this, or should I just accept it?
EDIT 1 (additional details)
Because I am using multiple logs for multiple processes, I have created a logger class. The stream handler uses the default stream, which should be sys.stderr.
import logging
import logging.handlers

class LoggerSetup(object):
    def __init__(self, logName, logFileNameAndPath, logSizeBytes, logFileCount):
        log = logging.getLogger(logName)
        log.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s - %(levelname)s - ' + logName + ' - %(message)s',
                                      datefmt="%m-%d %H:%M:%S")
        # Add the rotating file handler to the logger
        if logFileCount > 0:
            fileHandler = logging.handlers.RotatingFileHandler(logFileNameAndPath,
                                                               maxBytes=logSizeBytes,
                                                               backupCount=logFileCount)
            fileHandler.setFormatter(formatter)
            log.addHandler(fileHandler)
        # Console handler (defaults to sys.stderr)
        consoleHandler = logging.StreamHandler()
        consoleHandler.setFormatter(formatter)
        log.addHandler(consoleHandler)
        log.info(logName + ' initialized')
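For context, a typical call looks like this (the logger name and path here are just illustrative):
# hypothetical usage: rotating file log plus console output for one process
LoggerSetup('app', '/var/log/app/app.log', 1024 * 1024, 5)
log = logging.getLogger('app')
log.info('Application is running')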
For more reference, here is the startup script launched on boot. It then runs my run.py, which handles the rest of the startup process.
#!/bin/sh
### BEGIN INIT INFO
# Provides: ArcimotoStart
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Startup script
# Description: startup script that points to run.py
### END INIT INFO
# Change the next 3 lines to suit where you install your script and what you want to call it
DAEMON=/app/run.py
DAEMON_NAME=app
# Add any command line options for your daemon here
DAEMON_OPTS=""
# This next line determines what user the script runs as.
# Root generally not recommended but necessary if you are using certain features in Python.
DAEMON_USER=root
# The process ID of the script when it runs is stored here:
PIDFILE=/var/run/$DAEMON_NAME.pid
. /lib/lsb/init-functions
do_start () {
    log_daemon_msg "Starting system $DAEMON_NAME daemon"
    start-stop-daemon --start --background --pidfile $PIDFILE --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON -- $DAEMON_OPTS
    log_end_msg $?
}
do_stop () {
    log_daemon_msg "Stopping system $DAEMON_NAME daemon"
    start-stop-daemon --stop --pidfile $PIDFILE --retry 10
    log_end_msg $?
}

case "$1" in
    start|stop)
        do_${1}
        ;;
    restart|reload|force-reload)
        do_stop
        do_start
        ;;
    status)
        status_of_proc "$DAEMON_NAME" "$DAEMON" && exit 0 || exit $?
        ;;
    *)
        echo "Usage: /etc/init.d/$DAEMON_NAME {start|stop|restart|status}"
        exit 1
        ;;
esac
exit 0
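For reference, installing and enabling an init script like this on Debian typically looks like the following (a sketch; it assumes the file is saved as /etc/init.d/app to match DAEMON_NAME above):
sudo chmod +x /etc/init.d/app
sudo update-rc.d app defaults
sudo service app start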
I've just tried to connect my Python/Django app to a Vyatta server using Paramiko for SSHing. Unfortunately, when I try to run show interfaces it throws "Invalid command". However, if I try to SSH manually from that server, it works fine. I also tried vbash -c "show interfaces" - the same result.
import paramiko

ssh = paramiko.SSHClient()
ssh.connect('10.0.0.1', username='vyatta', password='vyatta')
stdin, stdout, stderr = ssh.exec_command('show interfaces')
# or stdin, stdout, stderr = ssh.exec_command('vbash -c "show interfaces"')
print '-'.join(stdout)
print '-'.join(stderr)
As mentioned earlier you can use vyatta-cfg-cmd-wrapper and set any configuration node:
import paramiko
command = """
/opt/vyatta/sbin/vyatta-cfg-cmd-wrapper begin
/opt/vyatta/sbin/vyatta-cfg-cmd-wrapper set system host-name newhostname
/opt/vyatta/sbin/vyatta-cfg-cmd-wrapper commit
/opt/vyatta/sbin/vyatta-cfg-cmd-wrapper save
"""
sshobj = paramiko.SSHClient()
sshobj.set_missing_host_key_policy(paramiko.AutoAddPolicy())
sshobj.connect(IP,username=login,password=sshpass)
stdin,stdout,stderr=sshobj.exec_command(command)
print ''.join(stdout)
sshobj.close()
And the result is as follows:
user#hostname$ python vyatta.py
Saving configuration to '/config/config.boot'...
Vyatta commands are implemented by templates in vbash. There are a bunch of environment variables that need to be set in order for the templates to work. You have to either use a remote environment that sources .profilerc, or use the undocumented vyatta-cfg-cmd-wrapper script to set up the more complex state necessary to commit changes.
In my case, I solved the problem by using this command:
vbash -c -i "restart vpn"
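Applied to the Paramiko snippet from the question, that would look roughly like this (a sketch only; the -c -i flags are taken from the answer above and the host and credentials are placeholders):
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('10.0.0.1', username='vyatta', password='vyatta')
# run the operational command through an interactive vbash so the
# Vyatta templates and environment variables get loaded
stdin, stdout, stderr = ssh.exec_command('vbash -c -i "show interfaces"')
print ''.join(stdout)
ssh.close()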
I would like to allow a user to view the output of a long-running CGI script as it is generated, rather than after the script is complete. However, even when I explicitly flush STDOUT, the server seems to wait for the script to complete before sending the response to the client. This is on a Linux server running Apache 2.2.9.
Example Python CGI:
#!/usr/bin/python
import time
import sys
print "Content-type: text/plain"
print
for i in range(1, 10):
    print i
    sys.stdout.flush()
    time.sleep(1)
print "Done."
Similar example in Perl:
#!/usr/bin/perl
print "Content-type: text/plain\n\n";
for ($i = 1; $i <= 10; $i++) {
    print "$i\n";
    sleep(1);
}
print "Done.";
This link says that as of Apache 1.3, CGI output should be unbuffered (but this might apply only to Apache 1.x): http://httpd.apache.org/docs/1.3/misc/FAQ-F.html#nph-scripts
Any ideas?
Randal Schwartz's article Watching long processes through CGI explains a different (and IMHO, better) way of watching a long running process.
Flushing STDOUT can help. For example, the following Perl program should work as intended:
#!/usr/bin/perl
use strict;
use warnings;
local $| = 1;
print "Content-type: text/plain\n\n";
for ( my $i = 1 ; $i <= 10 ; $i++ ) {
    print "$i\n";
    sleep(1);
}
print "Done.";
You must put your push script into a special directory which contains a special .htaccess
with these environment specs:
Options +ExecCGI
AddHandler cgi-script .cgi .sh .pl .py
SetEnvIfNoCase Content-Type \
"^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
SetEnv no-gzip dont-vary
According to CGI::Push, the Apache web server from version 1.3b2 on does not need server push scripts installed as NPH scripts: the -nph parameter to do_push() may be set to a false value to disable the extra headers needed by an NPH script.
You just have to find the do_push equivalent in Python.
Edit: Take a look at CherryPy: Streaming the response body.
When you set the config entry "response.stream" to True (and use "yield"), CherryPy manages the conversation between the HTTP server and your code like this:
(diagram omitted; source: cherrypy.org)
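As a rough illustration of that pattern (a sketch only; the class and page names are made up, and the parts that matter are the generator plus the response.stream setting):
import time
import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        def generate():
            for i in range(1, 11):
                yield "%d\n" % i
                time.sleep(1)
            yield "Done."
        return generate()
    # tell CherryPy to stream the generator instead of buffering the whole body
    index._cp_config = {'response.stream': True}

cherrypy.quickstart(Root())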