I am using supervisor to run celery with a Django 1.8.8 setup, together with django-supervisor==0.3.4 and supervisor==3.2.0.
But when I restart all processes, I get
unix:///tmp/supervisor.sock refused connection
and I am not able to restart any processes:
python manage.py supervisor --config-file=setting/staging_supervisor.conf --settings=setting.staging_settings restart all
Supervisor config file:
[supervisord]
logfile_maxbytes=10MB ; maximum size of logfile before rotation
logfile_backups=3 ; number of backed up logfiles
loglevel=warn ; info, debug, warn, trace
nodaemon=false ; run supervisord as a daemon
minfds=1024 ; number of startup file descriptors
minprocs=200 ; number of process descriptors
childlogdir=/logs/ ; where child log files will live
[program:celeryd_staging]
environment=PATH="{{ PROJECT_DIR }}/../../bin"
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celeryd -l info -c 1 --logfile=/logs/staging-celeryd.log --settings=setting.staging_celery_settings
redirect_stderr=false
[program:celerybeat_staging]
environment=PATH="{{ PROJECT_DIR }}/../../bin"
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celerybeat --loglevel=INFO --logfile=/logs/staging-celerybeat.log --settings=setting.staging_celery_settings
redirect_stderr=false
[group:tasks]
environment=PATH="{{ PROJECT_DIR }}/../../bin"
programs=celeryd_staging,celerybeat_staging
[program:autoreload]
exclude=true
[program:runserver]
exclude=true
Got the solution. The supervisord process had not been reloaded, as supervisord is inside my virtualenv (I am using the django-supervisor package).
Once I reloaded the supervisord process, the refused-connection error went away.
Make sure there isn't already a /tmp/supervisor.sock owned by some user other than you (like root); a quick ownership check is sketched at the end of this answer.
If it is not a permissions problem, add this to your supervisord configuration:
[unix_http_server]
file = /tmp/supervisor.sock ;
chmod=0700 ;
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
this might be helpful to you as well: https://github.com/Supervisor/supervisor/issues/480#issuecomment-145193475
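To rule out the ownership/permissions case quickly, here is a minimal check using only the Python standard library (the socket path is the one from the error message above):
import os
import pwd
import stat

st = os.stat("/tmp/supervisor.sock")
print("owner:", pwd.getpwuid(st.st_uid).pw_name)   # should be your user, not root
print("mode :", oct(stat.S_IMODE(st.st_mode)))     # compare with the chmod value in [unix_http_server]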
I have a Python Flask app that I have configured to run via supervisord. The supervisor.conf file looks something like this:
[inet_http_server]
port=127.0.0.1:9001
[supervisord]
logfile=/path/to/log/supervisord.log
logfile_maxbytes=0 ; max main logfile bytes b4 rotation; default 50MB
logfile_backups=0 ; # of main logfile backups; 0 means none, default 10
loglevel=debug ; log level; default info; others: debug,warn,trace
pidfile=/path/to/supervisord.pid
nodaemon=false ; start in foreground if true; default false
directory=/path/to/project
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
; The supervisorctl section configures how supervisorctl will connect to
; supervisord. Configure it to match the settings in either the unix_http_server
; or inet_http_server section.
[supervisorctl]
serverurl=http://127.0.0.1:9001
history_file=/path/to/.sc_history ; use readline history if available
[program:my_server]
command=<command to run the program>
directory=/path/to/project
stdout_logfile=/path/to/log/%(program_name)s_stdout.log ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=0 ;
stderr_logfile=/path/to/log/%(program_name)s_stderr.log ; stderr log path, NONE for none; default AUTO
stderr_logfile_backups=0 ; # of stderr logfile backups (0 means none, default 10)
The issue is, when I run the app via supervisord, it logs all output (info, debug, error, etc.) to the %(program_name)s_stderr.log file and not to the %(program_name)s_stdout.log file.
I log my info messages using Python's standard logging library, as:
logger.info("Some info msg")
What could be the reason for this behaviour?
While this question is tagged with flask and supervisord, the core issue is really how Python's logging system works. By default, logger.info() messages are sent to stderr, not stdout, so Flask and supervisord are doing exactly what they're told (really, Flask hardly enters into it).
The python/logger portion has a good answer here: Logging, StreamHandler and standard streams
In short, you have to create a separate StreamHandler for stderr and stdout, and tell them what messages (INFO, DEBUG, ERROR, etc.) go to which ones.
There's working code in that accepted answer, so I won't repeat it here.
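As a minimal sketch of that approach (assuming Python 3.2+, where a plain callable can be passed to addFilter), route records below WARNING to stdout and everything else to stderr; supervisord then splits them into the two log files configured above:
import logging
import sys

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# DEBUG/INFO go to stdout -> %(program_name)s_stdout.log
stdout_handler = logging.StreamHandler(sys.stdout)
stdout_handler.setLevel(logging.DEBUG)
stdout_handler.addFilter(lambda record: record.levelno < logging.WARNING)

# WARNING and above go to stderr -> %(program_name)s_stderr.log
stderr_handler = logging.StreamHandler(sys.stderr)
stderr_handler.setLevel(logging.WARNING)

logger.addHandler(stdout_handler)
logger.addHandler(stderr_handler)

logger.info("Some info msg")          # ends up in the stdout log
logger.error("Something went wrong")  # ends up in the stderr log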
I am a newbie to Airflow and struggling with BashOperator. I want to run a shell script using the bash operator in my dag.py.
I checked:
How to run bash script file in Airflow
and
BashOperator doen't run bash file apache airflow
on how to access shell script through bash operator.
This is what I did:
cmd = "./myfirstdag/dag/lib/script.sh "
t_1 = BashOperator(
    task_id='start',
    bash_command=cmd
)
On running my recipe and checking in Airflow, I got the error below:
[2018-11-01 10:44:05,078] {bash_operator.py:77} INFO - /tmp/airflowtmp7VmPci/startUDmFWW: line 1: ./myfirstdag/dag/lib/script.sh: No such file or directory
[2018-11-01 10:44:05,082] {bash_operator.py:80} INFO - Command exited with return code 127
[2018-11-01 10:44:05,083] {models.py:1361} ERROR - Bash command failed
Not sure why this is happening. Any help would be appreciated.
Thanks !
EDIT NOTE: I assume it is searching in some Airflow tmp location rather than the path I provided. But how do I make it search in the right path?
Try this:
bash_operator = BashOperator(
    task_id='task',
    bash_command='${AIRFLOW_HOME}/myfirstdag/dag/lib/script.sh ',  # note the trailing space
    dag=your_dag)
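Note the trailing space after script.sh: Airflow tries to render a bash_command that ends in .sh as a Jinja template file, so without the space the task can fail with a template-not-found error instead of executing the script.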
For those running a Docker version:
I had this same issue, and it took me a while to realise the problem; the behaviour can be different with Docker. When the DAG is run, the command is written to a tmp file. If you do not have Airflow on Docker, this is on the same machine. With my Docker version it is moved to another container to run, which of course does not have the script file on it.
Check the task logs carefully; you should see this happen before the task is run.
This may also depend on your airflow-docker setup.
Try the following. You need to give the full file path to your bash script.
cmd = "/home/notebook/work/myfirstdag/dag/lib/script.sh "
t_1 = BashOperator(
    task_id='start',
    bash_command=cmd
)
Are you sure of the path you defined?
cmd = "./myfirstdag/dag/lib/script.sh "
With the leading . the path is relative to the directory from which your command is executed.
Could you try this?
cmd = "find . -type f"
Try running this:
path = "/home/notebook/work/myfirstdag/dag/lib/script.sh"
copy_script_cmd = 'cp ' + path + ' .;'
execute_cmd = './script.sh '  # trailing space so Airflow does not treat the command as a Jinja template file
t_1 = BashOperator(
    task_id='start',
    bash_command=copy_script_cmd + execute_cmd
)
My environment is Debian Jessie on an embedded ARM display.
I am using a bash script to automatically launch my app using the init.d method. It launches a second python script as a daemon that handles my application on startup and reboot.
Because it is run this way, to the best of my knowledge it is now a daemonised background process, and STDOUT and STDIN are disconnected from my Python script.
The system and application is for a single purpose, so spamming the console with outputs from a background process is not only not a problem, but it is desired. With the outputs I can easily ssh or serial console into the display and see all the live debug outputs or exceptions.
I have looked into possible ways to force the process to the foreground, or to redirect its output to STDOUT, but have not found any definite answer for when the script is run at startup.
My logging to a file is working perfectly, and otherwise the app works well in the state it is in. Currently, when I need to debug, I stop the application and run it manually to get all the output.
I have considered using sockets to redirect the output from the app, with a separate listening script that prints it to the console (see the sketch after the init script below)... but that seems less than ideal, and a better solution might exist.
Is there a method to achieve this, or should I just accept it?
EDIT 1 (additional details)
Because I am using multiple logs for multiple processes, I have created a logger class. The stream handler uses the default stream, which should be sys.stderr.
import logging
import logging.handlers

class LoggerSetup(object):
    def __init__(self, logName, logFileNameAndPath, logSizeBytes, logFileCount):
        log = logging.getLogger(logName)
        log.setLevel(logging.INFO)
        formatter = logging.Formatter('%(asctime)s - %(levelname)s - ' + logName + ' - %(message)s',
                                      datefmt="%m-%d %H:%M:%S")
        # Add the log message handler to the logger
        if logFileCount > 0:
            fileHandler = logging.handlers.RotatingFileHandler(logFileNameAndPath,
                                                               maxBytes=logSizeBytes,
                                                               backupCount=logFileCount)
            log.addHandler(fileHandler)
            fileHandler.setFormatter(formatter)
        consoleHandler = logging.StreamHandler()
        log.addHandler(consoleHandler)
        consoleHandler.setFormatter(formatter)
        log.info(logName + ' initialized')
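A minimal usage sketch of the class above (the logger name, path and sizes are illustrative only):
# Attach a rotating file handler and a console (stderr) handler to the 'app' logger
LoggerSetup('app', '/logs/app.log', 1024 * 1024, 3)

logger = logging.getLogger('app')
logger.info('written to /logs/app.log and echoed by the console handler')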
For more reference, here is the startup script launched on boot. It then runs my Python run.py, which handles the rest of the startup process.
#!/bin/sh
### BEGIN INIT INFO
# Provides: ArcimotoStart
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Startup script
# Description: startup script that points to run.py
### END INIT INFO
# Change the next 3 lines to suit where you install your script and what you want to call it
DAEMON=/app/run.py
DAEMON_NAME=app
# Add any command line options for your daemon here
DAEMON_OPTS=""
# This next line determines what user the script runs as.
# Root generally not recommended but necessary if you are using certain features in Python.
DAEMON_USER=root
# The process ID of the script when it runs is stored here:
PIDFILE=/var/run/$DAEMON_NAME.pid
. /lib/lsb/init-functions
do_start () {
log_daemon_msg "Starting system $DAEMON_NAME daemon"
start-stop-daemon --start --background --pidfile $PIDFILE --make-pidfile --user $DAEMON_USER --chuid $DAEMON_USER --startas $DAEMON -- $DAEMON_OPTS
log_end_msg $?
}
do_stop () {
log_daemon_msg "Stopping system $DAEMON_NAME daemon"
start-stop-daemon --stop --pidfile $PIDFILE --retry 10
log_end_msg $?
}
case "$1" in
start|stop)
do_${1}
;;
restart|reload|force-reload)
do_stop
do_start
;;
status)
status_of_proc "$DAEMON_NAME" "$DAEMON" && exit 0 || exit $?
;;
*)
echo "Usage: /etc/init.d/$DAEMON_NAME {start|stop|restart|status}"
exit 1
;;
esac
exit 0
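For the socket idea mentioned in the question, here is a minimal sketch of the daemon side using the standard library's logging.handlers.SocketHandler (the host, port and logger name are assumptions). A small foreground script run over ssh or the serial console would then read the length-prefixed pickled records and print them; the Python logging cookbook has a complete receiver example.
import logging
import logging.handlers

# Daemon side: forward every record to a listener that prints them on its console.
# Host/port are illustrative; DEFAULT_TCP_LOGGING_PORT is 9020.
socket_handler = logging.handlers.SocketHandler(
    'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)

log = logging.getLogger('app')  # assumed logger name
log.addHandler(socket_handler)
log.info('this record is also sent to the listener, if one is running')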
What is the best way to initialize oslo.cfg in a celery project?
I want to do this:
from phantom.openstack.common import cfg
import os

possible_topdir = os.path.normpath(os.path.join(os.path.abspath(__file__),
                                                os.pardir, os.pardir))
conf_file = os.path.join(possible_topdir, 'etc/phantom', 'phantom.conf')
print 'config done 1'

config_files = None
if os.path.exists(conf_file):
    config_files = [conf_file]
print 'config done 2'

cfg.CONF(project='phantom', default_config_files=[conf_file])
I have my config file where I store a set of URLs and user privileges that I would use to access different systems during different task executions.
I tried setting it up in my celery.py, where I have this right now:
from __future__ import absolute_import
from celery import Celery
celery = Celery(include=['phantom.tasks.celery_tasks'])
celery.config_from_object('bin.celeryconfig')
If I insert it into this block, it gives me back an error saying:
LM-SJN-00871893:Phantom uruddarraju$ celery -A phantom.celery.celery worker -l DEBUG
usage: celery [-h] [--version] [--config-file PATH] [--config-dir DIR]
celery: error: unrecognized arguments: -A phantom.celery.celery worker -l DEBUG
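The "unrecognized arguments" error happens because cfg.CONF parses sys.argv by default, so when the module is imported inside a celery worker it tries to interpret celery's own flags (-A, worker, -l DEBUG). A minimal sketch of a workaround, assuming the bundled openstack.common cfg module has the same call signature as oslo.config, is to pass an empty args list so only the config file is read:
import os

from phantom.openstack.common import cfg  # same import as in the question

possible_topdir = os.path.normpath(os.path.join(os.path.abspath(__file__),
                                                os.pardir, os.pardir))
conf_file = os.path.join(possible_topdir, 'etc/phantom', 'phantom.conf')

# args=[] keeps cfg.CONF away from celery's command-line arguments
cfg.CONF(args=[], project='phantom',
         default_config_files=[conf_file] if os.path.exists(conf_file) else None)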
I am new to supervisor. Below is my supervisor config file.
# -*- conf -*-
[include]
files = *.supervisor
[supervisord]
pidfile = /var/run/supervisord.pid
[supervisorctl]
serverurl = unix://supervisord.sock
[unix_http_server]
file = /var/run/supervisord.sock
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[program:main]
process_name = main-%(process_num)s
command = /usr/bin/python /home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbTornadoServer/tornadoServer.py --tport %(process_num)s
    --port=%(process_num)s
    --log_file_prefix=%(here)s/logs/%(program_name)s-%(process_num)s.log
numprocs = 4
numprocs_start = 8050
Now, I need to daemonize the process so that:
1) I can stop the parent process and all children
2) Start them
3) Reload all child processes
4) If a child fails, it is automatically restarted
5) Here is the command line to start:
supervisord -c /home/ubuntu/workspace/rtbopsConfig/rtb_supervisor/tornadoSupervisor.conf
So... do I use runit? Upstart?
As of now I kill -9 all parent and child processes, and if I do, they are not respawned.
Take a look at supervisorctl; it allows you to start/restart/auto-start/stop processes. If that doesn't fit your needs, you can also communicate with supervisord through XML-RPC.
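A minimal sketch of the XML-RPC route, assuming the supervisor package is importable and using the unix socket path from the [unix_http_server] section above:
from xmlrpc.client import ServerProxy  # xmlrpclib.ServerProxy on Python 2

from supervisor.xmlrpc import SupervisorTransport

# Connect to supervisord over the unix socket declared in [unix_http_server]
transport = SupervisorTransport(None, None, 'unix:///var/run/supervisord.sock')
server = ServerProxy('http://127.0.0.1', transport=transport)

print(server.supervisor.getAllProcessInfo())  # state of every child process
server.supervisor.stopProcessGroup('main')    # stop all main-%(process_num)s children
server.supervisor.startProcessGroup('main')   # start them again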