Run python script with supervisord

I have a simple python script (a discord bot) and it works well when I run it with python3 discord_bot.py & or sh start_bot.sh.
But how can I run it with supervisord?
Update:
I have installed supervisord, but when I try to run the process, I get this error:
exit status 0; not expected
My supervisord config:
[program:AFI]
command=/home/maksymov/www/Bots/discord_bots/afi/start_bot.sh
autostart=true
autorestart=true
stderr_logfile=/var/log/afi.err.log
stdout_logfile=/var/log/afi.out.log
user=www-data

You probably need to use one of the "supervisors", like systemd or Ramona.
The first is more general; the second is more Python-specific.

I guess your program tries to run as a daemon. I pasted the most relevant part from the documentation:
Supervisord subprocess
Programs meant to be run under supervisor should not daemonize themselves. Instead, they should run in the foreground. They should not detach from the terminal from which they are started.
The easiest way to tell if a program will run in the foreground is to run the command that invokes the program from a shell prompt. If it gives you control of the terminal back, but continues running, it’s daemonizing itself and that will almost certainly be the wrong way to run it under supervisor.
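A common way this bites people: if start_bot.sh launches the bot with python3 discord_bot.py & and then exits, supervisord sees the wrapper script finish immediately, which produces exactly the "exit status 0; not expected" error above. A minimal sketch of a wrapper that stays in the foreground (the script path is an assumption based on the config in the question):

#!/bin/sh
# exec replaces the shell with the bot process, so supervisord ends up
# supervising the python process itself, running in the foreground
exec python3 /home/maksymov/www/Bots/discord_bots/afi/discord_bot.py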

Related

bash script that is run from Python reaches sudo timeout

This is a long bash script (400+ lines) that is originally invoked from a django app like so -
os.system('./bash_script.sh &> bash_log.log')
It stops on a random command in the script. If the order of commands is changed, it hangs on another command in approx. the same location.
sshing to the machine that runs the django app, and running sudo ./bash_script.sh, asks for a password and then runs all the way.
I can't see the message it presents when it hangs in the log file, couldn't make it redirect there. I assume it's a sudo password request.
Tried -
sudo -v in the script - didn't help.
ssh to the machine and manually extend the sudo timeout in /etc/sudoers - didn't help; I think the django app was already running and using the previous timeout.
splitting the script in two and running the first part in a separate thread, like so -
import os
from subprocess import Popen
from threading import Thread

def basher(command, log_path):
    # open the log for writing; opening it read-only would crash Popen
    with open(log_path, 'w') as log:
        Popen(command, stdout=log, stderr=log).wait()

script_thread = Thread(target=basher, args=('./bash_script_pt1.sh', 'bash_log_pt1.log'))
script_thread.start()
os.system('./bash_script_pt2.sh &> bash_log_pt2.log')  # I know it's deprecated, not sure if maybe it's better in this case
script_thread.join()
The logs showed that part 1 ended ok, but part 2 still hangs, albeit later in the code than when they were together.
I thought to edit /etc/sudoers from inside the Python code, and then re-login via su - user. There are snippets showing how to pass the password using pty; however, I don't understand the mechanics of it and could not get it to work.
I also noted that ps aux | grep bash_script.sh shows that the script is being run twice. As -
/bin/bash bash_script.sh
and as
sh -c bash_script.sh.
I assume os.system has an internal shell=True going on.
I don't understand the Linux entities/mechanics in play to figure out what's happening.
My guess is that the django app has different and more limited permissions than the script itself does, and the script inherits those restrictions because it is executed by it.
You need to find out what permissions the script has when you run it just from bash, and what it has when you run it via django, and then figure out what the difference is.
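One way to do that is to make the script log its identity and environment at the top, then compare a run from bash with a run via django; a hedged sketch:

#!/bin/bash
# print the effective user, groups, and key environment variables
# so the two invocations can be compared in the log
id
echo "HOME=$HOME PATH=$PATH"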

How can I keep my python-daemon process running or restart it on fail?

I have a python3.9 script I want to have running 24/7. In it, I use python-daemon to keep it running like so:
import daemon

with daemon.DaemonContext():
    %%script%%
It works fine, but after a few hours or days it just crashes randomly. I always start it with sudo, but I can't figure out where to find the daemon process's log file for debugging. What can I do to ensure logging? How can I keep the script running or auto-restart it after a crash?
You can find the full code here.
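A side note on the logging part: python-daemon closes open file descriptors when it daemonizes, which is likely why no log file ever shows up. A minimal sketch that keeps a log handler alive via DaemonContext's files_preserve option (the path and logger setup are illustrative, not from the question):

import logging
import daemon

handler = logging.FileHandler('/tmp/mybot.log')  # illustrative path
logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.INFO)

# files_preserve stops DaemonContext from closing the log's file descriptor
with daemon.DaemonContext(files_preserve=[handler.stream]):
    logging.info('daemon started')
    # ... the actual script body ...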
If you really want to run a script 24/7 in background, the cleanest and easiest way to do it would surely be to create a systemd service.
There are already many descriptions of how to do that, for example here.
One of the advantages of systemd, in addition to being able to launch a service at startup, is that it can restart the service after a failure:
Restart=on-failure
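A minimal sketch of such a unit file, with every path and name assumed (saved, for example, as /etc/systemd/system/myscript.service):

[Unit]
Description=Keep my python script running 24/7

[Service]
ExecStart=/usr/bin/python3 /path/to/script.py
Restart=on-failure

[Install]
WantedBy=multi-user.target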
If all you want to do is automatically restart the program after a crash, the easiest method would probably be to use a bash script.
You can use the until loop, which is used to execute a given set of commands as long as the given condition evaluates to false.
#!/bin/bash
until python /path/to/script.py; do
    echo "The program crashed at `date +%H:%M:%S`. Restarting the script..."
done
If the command returns a non-zero exit status, the script is restarted.
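Note that this wrapper itself dies if you close the terminal; one common workaround is to detach it, e.g. (the script name is assumed):

nohup ./keep_running.sh >/dev/null 2>&1 &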
I would start with familiarizing myself with those two questions:
How to make a Python script run like a service or daemon in Linux
Run a python script with supervisor
Looks like you need a supervisor that will make sure that your script/daemon is still running. You can take a look at supervisord.

How to interact with new shell I logged into from script?

Not able to send commands to the shell I logged into.
Originally, I wrote a Python script. It was able to send commands like
subprocess.run(['kubectl', 'config', 'get-context'], shell=True)
but when it came time to interact with the child shell (in this case bash), the commands wouldn't run until I exited that shell, and it would say things like it couldn't find the command.
I then tried to do it with the "sh" module, but was also unsuccessful.
I thought maybe using Python was the problem, and I also realized my ultimate goal was to use a different shell (cypher-shell), so I skipped straight to that with bash as the parent shell. In there I have a line that is sometimes successful, sometimes not:
kubectl run -it --rm cypher-shell --image=gcr.io/cloud-marketplace/neo4j-public/causal-cluster-k8s:3.4 --restart=Never --namespace=default --command -- ./bin/cypher-shell -u neo4j -p "password" -a "domain.name"
But even when it successfully logs in, it just hangs until I manually exit, and then it runs the next commands.
Note: I saw this and so, perhaps, it's not a child shell? Run shell command from child shell
I can't say I know exactly what you are doing, but if I understand your objective correctly you want the Python program to continue to log while the script continues to run? The problem is that the logger continues to run and holds up your program. The way I would deal with that would be to run the logger as a background process.
With bash, that would be ./script.sh & which would make it run without holding the rest of the program back from running.
Hopefully that may give you an idea! Good luck.

Watchdog for specific python process

I am working on Ubuntu 16.04 and I have a python process running in the background:
python myFunction.py
From time to time, the myFunction process gets killed for an unknown reason, and I want to restart it automatically. I have multiple python processes running in the background, and I cannot tell which one runs myFunction.py (e.g. with the pgrep command).
Is it possible? Can I make a bash or python script to restart the command python myFunction.py whenever the python process running it gets killed?
You can look at Supervisord which is (from its own documentation) :
a client/server system that allows its users to monitor and control a
number of processes on UNIX-like operating systems
Supervisord will keep your script in check. If it crashes, it will restart it again. If your machine reboots, it will make sure the script starts automatically after booting.
It works based on a config file formatted like this (more info in the docs):
[program:myFunction]
command=python /path_to_script/myFunction.py
autostart=true
autorestart=true
redirect_stderr=true
stdout_logfile=/var/log/myFunction.log
stderr_logfile=/var/log/myFunction.error.log
directory=/path_to_script
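Once that file is in place (typically under supervisord's conf.d directory), the standard way to load it is:

supervisorctl reread   # discover the new config file
supervisorctl update   # start programs whose config was added or changed
supervisorctl status   # confirm myFunction is RUNNING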
I hope this helps.

Run Python script forever, logging errors and restarting when crashes

I have a python script that continuously processes new data and writes it to a mongodb. The script runs continuously via a while loop with a sleep.
What is the recommended way to run the Python script forever, logging errors when they occur, and restarting when it crashes?
Will node.js's forever be suitable? I'm also running node/meteor on the same Ubuntu server.
supervisord is perfect for this sort of thing. While I used to check that programs were still running every couple of minutes with a cron job, supervisord runs all programs as its own child processes, so the moment your program terminates, supervisord is notified and automatically restarts it. I no longer need to parse the output of ps to see if a program crashed.
It has a simple declarative config file and configurable logging. By default it creates log files named your-program-name-stderr.log and your-program-name-stdout.log, which are automatically handled by logrotate when supervisord is installed from an OS package manager (Debian for me).
If you don't want to configure supervisord's logging, you should look at logging in python so you can control what goes into those files.
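For example, if the script simply logs to stdout, supervisord will capture everything into the stdout logfile for you; a minimal sketch:

import logging
import sys

# log to stdout so supervisord's stdout_logfile captures everything
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
)
logging.info('worker started')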
If you're on a Debian derivative you should be able to install and start the daemon simply by executing apt-get install supervisor as root.
The config file is very straightforward too:
[program:myprogram]
command=/path/to/my/program/script
directory=/path/to/my/program/base
user=myuser
autostart=true
autorestart=true
redirect_stderr=true
supervisorctl also lets you interactively see what your programs are doing, and it can start and stop multiple programs, e.g. supervisorctl start myprogram.
I recently wrote something similar. The basic pattern I follow is:
import logging
import time

while True:
    try:
        pass  # functionality goes here
    except SpecificError:  # placeholder for the exceptions you expect
        logging.exception("known failure")
    except Exception:  # catch everything else, but log it instead of swallowing it
        logging.exception("unexpected failure")
    finally:
        time.sleep(600)
To handle reboots you can use init.d or a cron @reboot job, as sketched below.
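For the cron route, a single @reboot entry in the crontab is enough (the path is assumed):

@reboot /usr/bin/python /path/to/script.py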
If you are writing a daemon, you should probably do it with this command:
http://manpages.ubuntu.com/manpages/lucid/man8/start-stop-daemon.8.html
You can spawn this from a System V /etc/init.d/ script, or use Upstart which is slowly replacing it.
Upstart: http://upstart.ubuntu.com/getting-started.html
System V: http://www.cyberciti.biz/tips/linux-write-sys-v-init-script-to-start-stop-service.html
I find System V easier to write, but if this will ever be packaged and distributed in a Debian package, I recommend writing an Upstart conf.
Definitely keep the sleep so the loop won't hog the CPU.
I don't know if this is still relevant to you, but I have been reading forever about how to do this and want to share what I did.
For me, the goal was to have a python script always running (on my Linux computer). The script has a "while True" loop in it, which should theoretically run forever, but if it crashes for any reason, I want it to restart. The script should also run after the computer reboots.
I am not an expert, but for me the best and most understandable approach was to use systemd (assuming you use Linux).
There are two nice examples of how to do this given here and here, showing how to write your .service files in either /etc/systemd/system or /lib/systemd/system. If you want to be completely correct, you should use the former:
"/etc/systemd/system/: units installed by the system administrator"
The documentation of systemd here is actually nice to read, even if you are not an expert.
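Once the .service file is written, the standard commands to activate it are (the unit name is assumed):

sudo systemctl daemon-reload                  # make systemd re-read unit files
sudo systemctl enable --now myscript.service  # start it now and on every boot
sudo systemctl status myscript.service        # check that it is running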
Hope this helps someone!
