Python Ubuntu Upstart job needs to restart itself

I have a Python script which I'm running as an Ubuntu Upstart job. I start it from the shell with:
sudo service my-service-name start
In the Python code for the service itself, I need to restart the service in some cases. Here's how I'm doing it:
import shlex
import subprocess

# Ask Upstart to restart this service from within the service itself
cmd = 'sudo service my-service-name restart'
subprocess.check_output(shlex.split(cmd), stderr=subprocess.STDOUT)
If I run cmd from the shell, I can successfully restart my Upstart job. If I run it from within the job itself, the job stops but never starts again.
Is there some problem with Upstart jobs restarting themselves in this fashion? If so, is there another way to get an Upstart job to restart itself?
The reason I'm restarting the job is that I've updated the underlying Python code on disk, and I'd like the job to restart so that it's running the new code.

You are updating the Python app code from within the Python app itself? That is not a good idea...
Anyway, one option is to use the respawn stanza in the Upstart job, set the signal handler for USR1 to the same handler as TERM, and then send yourself the USR1 signal (or just send yourself the TERM signal, but then you cannot tell whether a given respawn was triggered for your special purpose or not).
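A minimal sketch of that idea, assuming the job's .conf already contains the respawn stanza (request_restart is a hypothetical helper name):
import os
import signal
import sys

def _shutdown(signum, frame):
    # Exit with a non-zero status; with `respawn` in the job's .conf,
    # Upstart starts a fresh process running the updated code on disk.
    sys.exit(1)

# Treat USR1 like TERM so the process can ask to be respawned.
signal.signal(signal.SIGTERM, _shutdown)
signal.signal(signal.SIGUSR1, _shutdown)

def request_restart():
    # Hypothetical helper: signal ourselves instead of calling
    # `service ... restart`, which kills the job mid-command.
    os.kill(os.getpid(), signal.SIGUSR1)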

Related

How can I keep my python-daemon process running or restart it on fail?

I have a Python 3.9 script that I want to have running 24/7. In it, I use python-daemon to keep it running, like so:
import daemon
with daemon.DaemonContext():
    %%script%%
And it works fine but after a few hours or days, it just crashes randomly. I always start it with sudo but I can't seem to figure out where to find the log file of the daemon process for debugging. What can I do to ensure logging? How can I keep the script running or auto-restart it after crashing?
You can find the full code here.
If you really want to run a script 24/7 in the background, the cleanest and easiest way to do it is surely to create a systemd service.
There are already many descriptions of how to do that, for example here.
One of the advantages of systemd, in addition to being able to launch a service at startup, is to be able to restart it after failure.
Restart=on-failure
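A minimal sketch of such a unit file (the service name, paths, and interpreter are assumptions; save it as e.g. /etc/systemd/system/myscript.service):
[Unit]
Description=My 24/7 Python script

[Service]
ExecStart=/usr/bin/python3 /path/to/script.py
Restart=on-failure

[Install]
WantedBy=multi-user.target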
If all you want to do is automatically restart the program after a crash, the easiest method would probably be to use a bash script.
You can use the until loop, which is used to execute a given set of commands as long as the given condition evaluates to false.
#!/bin/bash
until python /path/to/script.py; do
echo "The program crashed at `date +%H:%M:%S`. Restarting the script..."
done
If the command returns a non-zero exit status, the script is restarted.
I would start by familiarizing myself with these two questions:
How to make a Python script run like a service or daemon in Linux
Run a python script with supervisor
Looks like you need a supervisor that will make sure that your script/daemon is still running. You can take a look at supervisord.

Watchdog for specific python process

I am working on Ubuntu 16.04 and I have a Python process running in the background:
python myFunction.py
From time to time, the myFunction process gets killed for unknown reasons, and I want to restart it automatically. I have multiple Python processes running in the background, and I do not know which one runs myFunction.py (e.g. when using the pgrep command).
Is this possible? Can I make a bash or Python script that restarts the command python myFunction.py whenever the Python process running it gets killed?
You can look at Supervisord, which is (from its own documentation):
a client/server system that allows its users to monitor and control a
number of processes on UNIX-like operating systems
Supervisord will keep your script in check: if it crashes, Supervisord will restart it, and if your machine reboots, it will make sure the script starts automatically after booting.
It works based on a config file formatted like this (more info in the docs):
[program:myFunction]
command=python /path_to_script/myFunction.py
autostart=true
autorestart=true
stdout_logfile=/var/log/myFunction.log
stderr_logfile=/var/log/myFunction.error.log
directory=/path_to_script
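After adding that block, you would typically reload the configuration and check the process, for example:
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl status myFunction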
I hope this will help you.

Run rq as daemon on server

In Python, I am using rq for background processes. Since I am running the same thing on my server as well, I want it to run as a daemon. Apart from plain Unix commands, does rq provide something to make it a daemon? In Ruby we have a gem called sidekiq, which provides options for the running environment, log files, and daemonizing.
I tried the Unix command rqworker & but it doesn't seem to be working properly.
In your case, nohup rq worker & will fit your needs (it will keep running on the server even if you close the SSH connection).
But if you really want to run your program as a daemon, have a look at:
https://pypi.python.org/pypi/python-daemon/
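If you go the python-daemon route, a minimal sketch might look like this (the queue name and Redis settings are assumptions):
import daemon
from redis import Redis
from rq import Queue, Worker

# Run an rq worker detached from the terminal via python-daemon.
with daemon.DaemonContext():
    redis_conn = Redis()  # assumes Redis on localhost:6379
    worker = Worker([Queue(connection=redis_conn)], connection=redis_conn)
    worker.work()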

Run Python script forever, logging errors and restarting when crashes

I have a Python script that continuously processes new data and writes to a MongoDB database. The script runs continuously via a while loop and a sleep.
What is the recommended way to run the Python script forever, logging errors when they occur, and restarting when it crashes?
Will node.js's forever be suitable? I'm also running node/meteor on the same Ubuntu server.
supervisord is perfect for this sort of thing. I used to check that programs were still running every couple of minutes with a cron job, but supervisord runs your programs as child processes, so in the event your program terminates, supervisord will automatically restart it. I no longer need to parse the output of ps to see if a program crashed.
It has a simple declarative config file and configurable logging. By default it creates log files named your-program-name-stdout.log and your-program-name-stderr.log, which are automatically handled by logrotate when supervisord is installed from an OS package manager (Debian for me).
If you don't want to configure supervisord's logging, you should look at logging in python so you can control what goes into those files.
If you're on a Debian derivative, you should be able to install and start the daemon simply by executing apt-get install supervisor as root.
The config file is very straightforward too:
[program:myprogram]
command=/path/to/my/program/script
directory=/path/to/my/program/base
user=myuser
autostart=true
autorestart=true
redirect_stderr=true
supervisorctl also allows you to see what your program is doing interactively, and it can start and stop multiple programs, e.g. supervisorctl start myprogram.
I recently wrote something similar. The basic pattern I follow is:
import time

while True:
    try:
        pass  # functionality goes here
    except SpecificError:
        pass  # log the exception
    except Exception:
        pass  # catch everything else
    finally:
        time.sleep(600)
To handle reboots, you can use init.d or cron jobs.
If you are writing a daemon, you should probably do it with this command:
http://manpages.ubuntu.com/manpages/lucid/man8/start-stop-daemon.8.html
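A sketch of how an init script might invoke it (the paths and pidfile name are assumptions):
start-stop-daemon --start --background \
    --make-pidfile --pidfile /var/run/myscript.pid \
    --exec /usr/bin/python -- /path/to/script.py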
You can spawn this from a System V /etc/init.d/ script, or use Upstart which is slowly replacing it.
Upstart: http://upstart.ubuntu.com/getting-started.html
System V: http://www.cyberciti.biz/tips/linux-write-sys-v-init-script-to-start-stop-service.html
I find System V scripts easier to write, but if this will ever be packaged and distributed as a Debian package, I recommend writing an Upstart conf.
Definitely keep the sleep so the loop doesn't hog the CPU.
I don't know if this is still relevant to you, but I have been reading forever about how to do this and want to share somewhere what I did.
For me, the goal was to have a Python script always running on my Linux computer. The script has a "while True" loop in it which should theoretically run forever, but if it crashes for any reason I cannot foresee, I want the script to restart. Also, when I restart the computer, it should run the script.
I am not an expert, but for me the best and most understandable option was to use systemd (assuming you use Linux).
There are two nice examples of how to do this given here and here, showing how to write your .service files in either /etc/systemd/system or /lib/systemd/system. If you want to be completely correct you should take the former:
" /etc/systemd/system/: units installed by the system administrator" 1
The documentation of systemd here is actually nice to read, even if you are not an expert.
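For example, after writing the unit file you would typically enable and start it (myscript.service is a hypothetical name):
sudo systemctl enable myscript.service
sudo systemctl start myscript.service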
Hope this helps someone!

Starting a python script on a remote machine which starts a bash script

I have what I believe to be a fairly unique problem for a script I use to stand up webservers on remote machines.
I have a controller script which after checking a ledger initiates a "builder" script on a remote machine. Part of this builder script calls a bash script which starts a process I want to continue running after both scripts are finished.
My only problem is that the builder script seems to finish (gets to the last line) but doesn't seem to return control to the controller script.
For the record, I am using subprocess.call in the controller script (to initiate an ssh call) to start the builder script on the remote machine. I have toyed with various ways of initiating the bash script from the builder script, but it seems the builder won't return control to the controller until I kill the processes spawned by the bash script.
Things I have tried:
pid=os.spawnl(os.P_NOWAIT,dest+'/start_background_script.sh')
pid=subprocess.Popen([dest+'/start_background_script.sh'])
os.system(dest+'/start_background_script.sh &')
The bash script is written so that when you execute it, it backgrounds two processes and then returns control.
Any recommendations?
Sounds like a job for Fabric to me.
Fabric wraps shell calls on remote (and also local) machines for you.
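A minimal sketch using the Fabric 2.x API (the hostname and paths are assumptions; redirecting the background processes' output and disabling the pty is what lets the SSH channel close so control returns to the controller):
from fabric import Connection

conn = Connection("build-host")  # hypothetical remote host
# Run the builder, then launch the long-running script with its output
# detached from the SSH session.
conn.run("python /opt/build/builder.py")
conn.run("nohup /opt/build/start_background_script.sh >/dev/null 2>&1 &", pty=False)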
