New to supervisor - how to make a daemon that works - python

I am new to supervisor. Below is my supervisor config file.
# -*- conf -*-
[include]
files = *.supervisor
[supervisord]
pidfile = /var/run/supervisord.pid
[supervisorctl]
serverurl = unix://supervisord.sock
[unix_http_server]
file = /var/run/supervisord.sock
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[program:main]
process_name = main-%(process_num)s
command = /usr/bin/python /home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbTornadoServer/tornadoServer.py --tport %(process_num)s
    --port=%(process_num)s
    --log_file_prefix=%(here)s/logs/%(program_name)s-%(process_num)s.log
numprocs = 4
numprocs_start = 8050
Now, I need to daemonize the process so that:
1) I can stop the parent process and all children
2) Start them
3) Reload all child processes
4) If a child fails, it is automatically restarted.
5) Here is the command line to start:
supervisord -c /home/ubuntu/workspace/rtbopsConfig/rtb_supervisor/tornadoSupervisor.conf
So... do I use runit? Upstart?
As of now I kill -9 all parent and child processes, and if I do, they are not respawned.

Take a look at supervisorctl; it allows you to start/restart/auto-start/stop processes. If that doesn't fit your needs, you can also communicate with supervisor through XML-RPC.
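For example, a few typical supervisorctl invocations against the config file from the question (the group name main comes from the [program:main] section; adjust if yours differs):
supervisorctl -c /home/ubuntu/workspace/rtbopsConfig/rtb_supervisor/tornadoSupervisor.conf status
supervisorctl -c /home/ubuntu/workspace/rtbopsConfig/rtb_supervisor/tornadoSupervisor.conf stop main:*
supervisorctl -c /home/ubuntu/workspace/rtbopsConfig/rtb_supervisor/tornadoSupervisor.conf start main:*
supervisorctl -c /home/ubuntu/workspace/rtbopsConfig/rtb_supervisor/tornadoSupervisor.conf restart main:*
For requirement 4, adding autostart = true and autorestart = true to the [program:main] section tells supervisord to respawn a child that exits unexpectedly.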

Related

Close pdf using subprocess in python

I am trying to close a pdf which I opened with the following process:
import subprocess
openpdffile = subprocess.Popen([file_path], shell=True)
I tried
openpdffile.kill()
But that keeps the pdf open in my pdf reader. Any suggestions?
Many thanks.
The reason is that subprocess.Popen creates a new process. So what is actually happening in your code is that you create a new process and then you kill that new process, not the PDF reader itself. Instead, you need to find out the reader's process id and kill that.
Note: these shell commands work on Windows. To use them in a UNIX environment, you need to change the shell commands accordingly.
import os
import subprocess
pid = subprocess.getoutput('tasklist | grep Notepad.exe').split()[1]
# we are taking [1] because this is the output produced by
# 'tasklist | grep Notepad.exe'
# Image Name     PID     Session Name   Session#   Mem Usage
# =============  ======  =============  =========  ==========
# Notepad.exe    10936   Console        17         16,584 K
os.system(f'taskkill /pid {pid}')
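For a UNIX environment (per the note above), a rough equivalent sketch using pgrep and SIGTERM; the viewer name 'evince' is an assumption, substitute whatever PDF reader you actually use:
import os
import signal
import subprocess

viewer = 'evince'  # assumption: replace with your PDF reader's process name
pids = subprocess.getoutput(f'pgrep -x {viewer}').split()
for pid in pids:
    os.kill(int(pid), signal.SIGTERM)  # politely ask the viewer to exit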
EDIT: To kill a specific process, use the code below.
import os
import subprocess
FILE_NAME = 'test.pdf' # Change this to your pdf file and it should work
proc = subprocess.getoutput('tasklist /fi "imagename eq Acrobat.exe" /fo csv /v /nh')
proc_list = proc.replace('"', '').split('\n')
for x in proc_list:
    p = x.split(',')
    # match on the window title column, which normally starts with the open file's name
    if p[9].startswith(FILE_NAME):
        pid = p[1]  # the PID column
        os.system(f'taskkill /pid {pid}')
You can get the pid when you launch the subprocess, and decide which process to kill at your convenience.
Here you can learn how to get the pid of a subprocess: https://stackoverflow.com/a/7989942/13837927
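A minimal sketch of that approach, assuming you launch the reader yourself rather than relying on shell=True and the bare file path (the viewer command and file path below are placeholders):
import subprocess

proc = subprocess.Popen(['evince', '/path/to/file.pdf'])  # placeholder viewer and file
print(proc.pid)    # the child's process id
# ... later, when you want the pdf closed ...
proc.terminate()   # send SIGTERM; proc.kill() forces it if the viewer ignores SIGTERM
proc.wait()        # reap the child so it doesn't linger as a zombie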

django-supervisor connection refused

I am using supervisor to run celery with a Django 1.8.8 setup, together with django-supervisor==0.3.4 and
supervisor==3.2.0
but when I restart all processes, I get
unix:///tmp/supervisor.sock refused connection
and I am not able to restart any processes.
python manage.py supervisor --config-file=setting/staging_supervisor.conf --settings=setting.staging_settings restart all
Supervisor config file:
[supervisord]
logfile_maxbytes=10MB ; maximum size of logfile before rotation
logfile_backups=3 ; number of backed up logfiles
loglevel=warn ; info, debug, warn, trace
nodaemon=false ; run supervisord as a daemon
minfds=1024 ; number of startup file descriptors
minprocs=200 ; number of process descriptors
childlogdir=/logs/ ; where child log files will live
[program:celeryd_staging]
environment=PATH="{{ PROJECT_DIR }}/../../bin"
command={{ PYTHON }} {{ PROJECT_DIR }}/manage.py celeryd -l info -c 1 --logfile=/logs/staging-celeryd.log --settings=setting.staging_celery_settings
redirect_stderr=false
[program:celerybeat_staging]
environment=PATH="{{ PROJECT_DIR }}/../../bin"
command=/{{ PYTHON }} {{ PROJECT_DIR }}/manage.py celerybeat --loglevel=INFO --logfile=/logs/staging-celerybeat.log --settings=setting.staging_celery_settings
redirect_stderr=false
[group:tasks]
environment=PATH="{{ PROJECT_DIR}}/../../bin"
programs=celeryd_staging,celerybeat_staging
[program:autoreload]
exclude=true
[program:runserver]
exclude=true
Got the solution. The supervisord process was not reloaded, since supervisord lives in my virtualenv (I am using the django-supervisor package).
Once I reloaded the supervisor process, the refused connection error went away.
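In plain-supervisor terms the reload looks roughly like this (a sketch; with django-supervisor you would go through python manage.py supervisor instead, and the config path below is the one from the question):
# shut down the supervisord instance that is holding the stale socket
supervisorctl -c setting/staging_supervisor.conf shutdown
# start it again so a fresh /tmp/supervisor.sock is created
supervisord -c setting/staging_supervisor.conf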
Make sure there isn't already another /tmp/supervisor.sock owned by some user other than you (like root or something).
If it's not a permissions problem, add this to your supervisord configuration:
[unix_http_server]
file = /tmp/supervisor.sock ;
chmod=0700 ;
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
this might be helpful to you as well: https://github.com/Supervisor/supervisor/issues/480#issuecomment-145193475

Interactive, non-blocking subprocess.Popen script without using communicate or pexpect

A: Why does it block?
B: How may I massage this slightly so that it will run without blocking?
#!/usr/bin/env python
import subprocess as sp
import os
kwds = dict(
    stdin=sp.PIPE,
    stdout=sp.PIPE,
    stderr=sp.PIPE,
    cwd=os.path.abspath(os.getcwd()),
    shell=True,
    executable='/bin/bash',
    bufsize=1,
    universal_newlines=True,
)
cmd = '/bin/bash'
proc = sp.Popen(cmd, **kwds)
proc.stdin.write('ls -lashtr\n')
proc.stdin.flush()
# This blocks and never returns
proc.stdout.read()
I need this to run interactively.
This is a simplified example, but the reality is I have a long-running process and I'd like to start up a shell script that can more or less run arbitrary code (because it's an installation script).
EDIT:
I would like to effectively take a .bash_history gathered over several different logins, clean it up into a single script, and then execute the newly crafted shell script line by line inside a shell driven by a Python script.
For example:
> ... ssh to remote aws system ...
> sudo su -
> apt-get install stuff
> su - $USERNAME
> ... create and enter a docker snapshot ...
> ... install packages, update configurations
> ... install new services, update service configurations ...
> ... drop out of snapshot ...
> ... commit the snapshot ...
> ... remove the snapshot ...
> ... update services ...
> ... restart services ...
> ... drop into a tmux within the new docker ...
This takes hours manually; it should be automated.
A: Why does it block?
It blocks because that's what .read() does: it reads all of the bytes until an end-of-file indication. Since the process never indicates end of file, the .read() never returns.
B: How may I massage this slightly (emphasis on slightly) so that it will run without blocking?
One thing to do is to cause the process to indicate end of file. A small change is to cause the subprocess to exit.
proc.stdin.write('ls -lashtr; exit\n')
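If you would rather keep the shell alive and keep feeding it commands, a minimal sketch of another way to avoid the blocking read (my own variation, not part of the answer above) is to echo a sentinel after each command and read lines until the sentinel appears:
#!/usr/bin/env python
import subprocess as sp

proc = sp.Popen('/bin/bash', stdin=sp.PIPE, stdout=sp.PIPE,
                universal_newlines=True, bufsize=1)

def run(cmd, marker='__CMD_DONE__'):
    # run one command, then echo an arbitrary sentinel so we know where its output ends
    proc.stdin.write(cmd + '; echo %s\n' % marker)
    proc.stdin.flush()
    out = []
    for line in iter(proc.stdout.readline, ''):
        if line.strip() == marker:
            break
        out.append(line)
    return ''.join(out)

print(run('ls -lashtr'))
print(run('echo hello'))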
This is an example from another answer of mine: https://stackoverflow.com/a/43012138/3555925, which does not use pexpect. You can see more detail in that answer.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import sys
import select
import termios
import tty
import pty
from subprocess import Popen
command = 'bash'
# command = 'docker run -it --rm centos /bin/bash'.split()
# save original tty setting then set it to raw mode
old_tty = termios.tcgetattr(sys.stdin)
tty.setraw(sys.stdin.fileno())
# open pseudo-terminal to interact with subprocess
master_fd, slave_fd = pty.openpty()
# use os.setsid() to make it run in a new process group, or bash job control will not be enabled
p = Popen(command,
          preexec_fn=os.setsid,
          stdin=slave_fd,
          stdout=slave_fd,
          stderr=slave_fd,
          universal_newlines=True)
while p.poll() is None:
    r, w, e = select.select([sys.stdin, master_fd], [], [])
    if sys.stdin in r:
        d = os.read(sys.stdin.fileno(), 10240)
        os.write(master_fd, d)
    elif master_fd in r:
        o = os.read(master_fd, 10240)
        if o:
            os.write(sys.stdout.fileno(), o)
# restore tty settings back
termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_tty)

permanently change directory python scripting/what environment do python scripts run in?

I have a small git_cloner script that clones my company's projects correctly. In all my scripts, I use a func that hasn't given me problems yet:
def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()
At the end of this individual script, I use:
call_sp('cd {}'.format(branch_path))
This line does not change the terminal I ran my script in to the directory branch_path; in fact, even worse, it annoyingly asks me for my password! When I remove the cd yadayada line above, my script no longer demands a password before completing. I wonder:
1) How are these python scripts actually running, since the cd command had no permanent effect? I assume the script spawns its own private subprocess, separate from what the terminal is doing, then kills itself when the script finishes?
2) Based on how #1 works, how do I force my scripts to change the terminal directory permanently, to save me time?
3) Why would merely running a change directory ask me for my password?
The full script is below, thank you,
Cody
#!/usr/bin/env python
import subprocess
import sys
import time
from os.path import expanduser
home_path = expanduser('~')
project_path = home_path + '/projects'
d = {'cwd': ''}
#calling from script:
# ./git_cloner.py projectname branchname
# to make a new branch say ./git_cloner.py project branchname
#interactive:
# just run ./git_cloner.py
if len(sys.argv) == 3:
    project = sys.argv[1]
    branch = sys.argv[2]
if len(sys.argv) < 3:
    while True:
        project = raw_input('Enter a project name (i.e., mainworkproject):\n')
        if not project:
            continue
        break
    while True:
        branch = raw_input('Enter a branch name (i.e., dev):\n')
        if not branch:
            continue
        break
def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()
print "making new branch \"%s\" in project \"%s\"" % (branch, project)
this_project_path = '%s/%s' % (project_path, project)
branch_path = '%s/%s' % (this_project_path, branch)
d['cwd'] = project_path
call_sp('mkdir %s' % branch, **d)
d['cwd'] = branch_path
git_string = 'git clone ssh://git@git/home/git/repos/{}.git {}'.format(project, d['cwd'])
#see what you're doing to maybe need to cancel
print '\n'
print "{}\n\n".format(git_string)
call_sp(git_string)
time.sleep(30)
call_sp('git checkout dev', **d)
time.sleep(2)
call_sp('git checkout -b {}'.format(branch), **d)
time.sleep(5)
#...then I make some symlinks, which work
call_sp('cp {}/dev/settings.py {}/settings.py'.format(project_path, branch_path))
print 'dont forget "git push -u origin {}"'.format(branch)
call_sp('cd {}'.format(branch_path))
You cannot use Popen to change the current directory of the running script. Popen will create a new process with its own environment. If you do a cd within that, it will change directory for that running process, which will then immediately exit.
If you want to change the directory for the script you could use os.chdir(path), then all subsequent commands in the script will be run from that new path.
Child processes cannot alter the environment of their parents though, so you can't have a process you create change the environment of the caller.
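A minimal sketch of the os.chdir approach (branch_path here is a placeholder standing in for the path computed in the question's script):
import os

branch_path = '/home/you/projects/someproject/somebranch'  # placeholder
os.chdir(branch_path)    # changes this Python process's working directory, not the terminal's
print(os.getcwd())       # every later command, including Popen calls without cwd=, now runs from branch_path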

How to get the environment variables of a subprocess after it finishes running?

I'm looking for a way to do this, so that I can pass it to the environment of another subprocess.
Here's a simple function which runs a command in a subprocess, then extracts its environment into the current process.
It's based on Fnord's version, without the tempfile, and with a marker line to distinguish the SET command from any output of the process itself. It's not bulletproof, but it works for my purposes.
def setenv(cmd):
    cmd = cmd + ' && echo ~~~~START_ENVIRONMENT_HERE~~~~ && set'
    env = (subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
           .stdout
           .read()
           .decode('utf-8')
           .splitlines())
    record = False
    for e in env:
        if record:
            e = e.strip().split('=')
            os.environ[e[0]] = e[1]
        elif e.strip() == '~~~~START_ENVIRONMENT_HERE~~~~':
            record = True
Unfortunately the child's environment will evaporate as soon as it exits, and even if you use the special file /proc/[pid]/environ in the /proc filesystem on Unix, it won't reflect changes made by the child process.
Even if the above did work, you'd have a race condition: the parent would need to determine the "right time" to read the environment, ideally right after the child modified it. To do that the parent would need to coordinate with the child, and as long as you're coordinating you might as well be communicating explicitly.
You'd need to pass state between parent and child over a socket, pipe, shared memory, etc. The multiprocessing module can make this a bit easier, letting you pass data from child to parent via queues or pipes.
Updated: Here's a quick sketch of using the multiprocessing module to let a parent process share values with child processes, and for child processes to communicate with one another across a queue. It makes it pretty simple:
import os
from multiprocessing import Process, Manager, Queue
def worker1(d, q):
    # receive value from worker2
    msg = q.get()
    d['value'] += 1
    d['worker1'] = os.getpid(), msg

def worker2(d, q):
    # send value to worker1
    q.put('hi from worker2')
    d['value'] += 1
    d['worker2'] = os.getpid()

if __name__ == '__main__':
    mgr = Manager()
    d = mgr.dict()
    q = Queue()
    d['value'] = 1
    p1 = Process(target=worker1, args=(d, q))
    p1.start()
    p2 = Process(target=worker2, args=(d, q))
    p2.start()
    p1.join()
    p2.join()
    print d
Result:
{'worker1': (47395, 'hi from worker2'), 'worker2': 47396, 'value': 3}
In Windows you could use the SET command to get what you want, like this:
import os, tempfile, subprocess
def set_env(bat_file):
    ''' Set current os.environ variables by sourcing an existing .bat file
    Note that because of a bug with stdout=subprocess.PIPE in my environment
    I use '>' to pipe the output of 'set' into a text file instead of
    using stdout. So you could simplify this a bit...
    '''
    # Run the command and pipe the output of 'set' to a tempfile
    temp = tempfile.mktemp()
    cmd = '%s && set > %s' % (bat_file, temp)
    login = subprocess.Popen(cmd, shell=True)
    state = login.wait()
    # Parse the output
    data = []
    if os.path.isfile(temp):
        with open(temp, 'r') as file:
            data = file.readlines()
        os.remove(temp)
    # Every line will set an env variable
    for env in data:
        env = env.strip().split('=')
        os.environ[env[0]] = env[1]
# Make an environment variable
os.environ['SOME_AWESOME_VARIABLE']='nothing'
# Run a batch file which you expect, amongst other things, to change this env variable
set_env('C:/do_something_awesome.bat')
# Lets see what happened
os.environ['SOME_AWESOME_VARIABLE']
# RESULT: 'AWESOME'
So now you can use this to read .bat files and then use the environment variables they generate as you please: modify them, add to them, pass them on to a new process, etc.
Can you print them out in the first subprocess and deal with that string in python?
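A minimal sketch of that idea (the use of JSON and the inline child commands are my own choices, purely illustrative): have the child print its environment as JSON on stdout, then parse it in the parent and hand it to the next subprocess:
import json
import subprocess

# child: in place of your real command, just dump its environment as JSON
child_code = 'import json, os; print(json.dumps(dict(os.environ)))'
out = subprocess.check_output(['python', '-c', child_code])

child_env = json.loads(out)
# pass the captured environment on to another subprocess
subprocess.check_call(['python', '-c', 'import os; print(os.environ.get("PATH"))'],
                      env=child_env)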
Wade's answer was nearly perfect. Apparently I had an entry in my environment with no second element after splitting on '=', and that was breaking os.environ[e[0]] = e[1].
def setenv(cmd):
    cmd = cmd + ' && echo ~~~~START_ENVIRONMENT_HERE~~~~ && set'
    env = (subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
           .stdout
           .read()
           .decode('utf-8')
           .splitlines())
    record = False
    for e in env:
        if record:
            e = e.strip().split('=')
            if len(e) > 1:
                os.environ[e[0]] = e[1]
        elif e.strip() == '~~~~START_ENVIRONMENT_HERE~~~~':
            record = True
