I'm running a Python script at system boot (I've tried both init.d and systemd). If something goes wrong, it quits and writes the problem to a log, but then I need to restart it manually. Is there a way to do this with native tools (i.e. without checking "ps -A" and restarting it myself)?
Since you're already using a service manager like systemd to run your script at boot, I would configure that service manager to automatically restart the script when it crashes. (This is a big part of what service managers are for!)
According to the systemd.service man page, you can add something like this to your service file:
Restart=on-failure
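For context, a minimal unit file using that directive might look roughly like this (the description, interpreter path and script path are placeholders for your own setup):

[Unit]
Description=My Python script

[Service]
# Placeholder paths: point these at your actual interpreter and script
ExecStart=/usr/bin/python3 /opt/myapp/myscript.py
Restart=on-failure
# Optional: wait a few seconds between restart attempts
RestartSec=5

[Install]
WantedBy=multi-user.target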
Perhaps you can use a try and except block to output the error and restart the software without quitting?
try:
    print("a")
except Exception as e:
    print(e)
This will jump to the except block at the first problem. You can put your restart code inside the except block and run the try block again.
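A rough sketch of that idea might look like the following (do_work is just a placeholder for whatever your script actually does):

import logging
import time

def do_work():
    # placeholder for the real work your script performs
    print("a")

while True:
    try:
        do_work()
        break  # finished without error, leave the loop
    except Exception as e:
        logging.exception(e)  # write the problem to a log
        time.sleep(5)         # short pause before trying again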
Related
I have a python3.9 script I want to have running 24/7. In it, I use python-daemon to keep it running like so:
import daemon

with daemon.DaemonContext():
    %%script%%
And it works fine but after a few hours or days, it just crashes randomly. I always start it with sudo but I can't seem to figure out where to find the log file of the daemon process for debugging. What can I do to ensure logging? How can I keep the script running or auto-restart it after crashing?
You can find the full code here.
If you really want to run a script 24/7 in background, the cleanest and easiest way to do it would surely be to create a systemd service.
There are already many descriptions of how to do that, for example here.
One of the advantages of systemd, in addition to being able to launch a service at startup, is to be able to restart it after failure.
Restart=on-failure
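Another advantage is that anything your script writes to stdout/stderr ends up in the journal, which also covers the logging part of your question; assuming the unit is called mydaemon.service, you can follow its output with:

journalctl -u mydaemon.service -f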
If all you want to do is automatically restart the program after a crash, the easiest method would probably be to use a bash script.
You can use the until loop, which is used to execute a given set of commands as long as the given condition evaluates to false.
#!/bin/bash
until python /path/to/script.py; do
    echo "The program crashed at `date +%H:%M:%S`. Restarting the script..."
done
If the command returns a non-zero exit status, the script is restarted.
I would start by familiarizing myself with these two questions:
How to make a Python script run like a service or daemon in Linux
Run a python script with supervisor
Looks like you need a supervisor that will make sure that your script/daemon is still running. You can take a look at supervisord.
I would like to set some debugging command (like import ipdb; ipdb.set_trace()) that would run a debugger in Jupyter (I would have to run an HTTP server).
Does anybody know about something like this?
Context: I have long-running tasks that are processed by a scheduler (not in interactive mode). I would like to be able to debug such a task while it is running, in the same way.
I need to run code "detached" (not interactively), and when some error is detected I would like to run a debugger. That's why I've been thinking about a remote debugger/Jupyter notebook or whatever. So by default there is no debugging session, which is why I think the PyCharm remote debugger is not an option.
Contrary to what you might seem to think here, you do not really need to run the code in a "debugging session" to use remote debugging.
Try the following:
Install pydevd in the Python environment for your "detached" code:
pip install pydevd
At the places in that code where you would otherwise have used pdb.set_trace, write:
import pydevd; pydevd.settrace('your-debugger-hostname-or-ip')
Now whenever your code hits the pydevd.settrace instruction, it will attempt to connect to your debugger server.
You may then launch the debugger server from within Eclipse PyDev or Pycharm, and have the "traced" process connect to you ready for debugging. Read here for more details.
It is, of course, up to you to decide what to do in case of a connection timeout: you can either have your process wait for the debugger forever in a loop, or give up at some point. Here is an example which seems to work for me (I ran the service on a remote Linux machine, connected to it via SSH with remote port forwarding, and launched the local debug server via Eclipse PyDev under Windows):
import pydevd
import socket
from socket import error

def wait_for_debugger(ex, retries=10):
    print("Bam. Connecting to debugger now...")
    while True:
        try:
            pydevd.settrace()
            break
        except SystemExit:
            # pydevd raises a SystemExit on connection failure for some reason
            retries -= 1
            if not retries:
                raise ex
            print(".. waiting ..")

def main():
    print("Hello")
    world = 1
    try:
        raise Exception
    except Exception as ex:
        wait_for_debugger(ex)

main()
It seems you should start the local debug server before enabling port forwarding, though. Otherwise settrace hangs infinitely, apparently believing it has "connected" when it really hasn't.
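For the SSH remote port forwarding step mentioned above, something along these lines should work (5678 is the usual default port of the PyDev debug server; user and host are placeholders):

ssh -R 5678:localhost:5678 user@remote-linux-machine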
There also seems to be a small project named rpcpdb with a similar purpose, however I couldn't get it to work right out of the box so can't comment much (I am convinced that stepping through code in an IDE is way more convenient anyway).
I have read that Upstart is obsolete in favor of systemd on the Raspberry Pi 3.
My question is: how do I run a Python script
a) forever, unless I manually kill it
b) so that it restarts automatically, without any human intervention, if it dies due to some exception or otherwise stops running
My Python script itself already uses modules like schedule and while True loops to keep running certain jobs every few seconds.
I am just worried that it will die/stop (which it did) after some indeterminate amount of time.
If it stops, all I want is for it to restart.
Currently, I run the script by double-clicking it to open it in Python IDLE (2.7) and then running the module.
What is the best way to run and open a python script and let it run continuously non-stop and then have it auto restart when it dies / stops for whatever reason?
See this picture, where it suddenly stops by itself at 5-something am.
I think you should take a look at Python Supervisor. Supervisor will manage the restart in the event of a crash or even machine re-starts.
http://supervisord.org/
An easier method might be to handle the failure within your script. If it is failing due to some exception, wrap the offending code in a try:except block and handle it gracefully within the script.
That said, this post has the information you need to use systemd to execute a BASH script:
https://unix.stackexchange.com/questions/47695/how-to-write-startup-script-for-systemd
Within your script, you can easily run a python script and catch its return value (when it returns failure in your case) and react appropriately.
Something like this:
#!/bin/bash
python ~/path/to/my/script/myScript.py
if [ $? -ne 0 ]; then
    echo "myScript.py exited with an error" >&2  # handle the failure here
fi
If that won't work either, you can create a script whose sole job is to call the other script and handle its failures, and use systemd to call that script.
I have a Python script with a GUI (using wxPython). I want to run it continuously on my (K)ubuntu system as a service. In case it exits due to some exception, I need it to restart automatically. I tried Upstart but it stops the service immediately, as soon as it starts.
Is there a super simple way to do this? (I tried restarting the Python script from within itself, and tried simple shell scripts with infinite loops, but I need something robust and reliable.)
Any help is greatly appreciated.
I know you said you tried shell scripts with infinite loops, but did you try using an "outer" Python script that runs perpetually as a service? It could just catch the exceptions and restart the Python GUI script whenever one occurs.
Something like:
import myGUI

while True:
    try:
        myGUI.runGUICode()  # make sure the execution stays in this loop
    except:
        pass  # or do some recovery, initialization, and logging here
I have a Python script that continuously processes new data and writes it to MongoDB. In the script, a while loop and a sleep keep the code running continuously.
What is the recommended way to run the Python script forever, logging errors when they occur, and restarting when it crashes?
Will node.js's forever be suitable? I'm also running node/meteor on the same Ubuntu server.
supervisord is perfect for this sort of thing. While I used to check that programs were still running every couple of minutes with a cron job, supervisord runs all of its programs as child processes, so in the event your program terminates, supervisord will automatically restart it. I no longer need to parse the output of ps to see if a program crashed.
It has a simple declarative config file and configurable logging. By default it creates log files named your-program-name-stderr.log and your-program-name-stdout.log, which are automatically handled by logrotate when supervisord is installed from an OS package manager (Debian for me).
If you don't want to configure supervisord's logging, you should look at logging in python so you can control what goes into those files.
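As a minimal sketch, a basicConfig call near the top of your script is usually enough to get timestamped messages on stdout/stderr, which supervisord then captures in the log files mentioned above:

import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger(__name__)
logger.info("processing started")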
If you're on a Debian derivative you should be able to install and start the daemon simply by executing apt-get install supervisor as root.
The config file is very straightforward too:
[program:myprogram]
command=/path/to/my/program/script
directory=/path/to/my/program/base
user=myuser
autostart=true
autorestart=true
redirect_stderr=True
supervisorctl also allows you to see what your program is doing interactively and can start and stop multiple programs with supervisorctl start myprogram etc
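For example, with the [program:myprogram] section above loaded, an interactive session might look like this:

supervisorctl status              # list all managed programs and their state
supervisorctl restart myprogram   # restart a single program
supervisorctl tail myprogram      # show the program's recent stdout log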
Recently wrote something similar. The basic pattern I follow is
import time

while True:
    try:
        pass  # functionality goes here
    except SpecificError:  # replace with the specific exception(s) you expect
        pass  # log the exception
    except:
        pass  # catch everything else
    finally:
        time.sleep(600)
To handle reboots you can use init.d or cron jobs.
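With cron, for example, an @reboot entry in your crontab (edited via crontab -e) will launch the script at every boot; the path here is just a placeholder:

@reboot /usr/bin/python /path/to/script.py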
If you are writing a daemon, you should probably do it with this command:
http://manpages.ubuntu.com/manpages/lucid/man8/start-stop-daemon.8.html
You can spawn this from a System V /etc/init.d/ script, or use Upstart which is slowly replacing it.
Upstart: http://upstart.ubuntu.com/getting-started.html
System V: http://www.cyberciti.biz/tips/linux-write-sys-v-init-script-to-start-stop-service.html
I find System V easier to write, but if this will ever be packaged and distributed as a Debian package, I recommend writing an Upstart conf.
Definitely keep the sleep so the loop won't hog the CPU.
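As a rough sketch only (check the man page linked above for the exact options on your system), an init.d script might launch the script in the background with something like:

start-stop-daemon --start --background \
    --make-pidfile --pidfile /var/run/myscript.pid \
    --exec /usr/bin/python -- /path/to/script.py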
I don't know if this is still relevant to you, but I have been reading forever about how to do this and want to share somewhere what I did.
For me, the goal was to have a Python script running always (on my Linux computer). The Python script also has a while True loop in it which should theoretically run forever, but if it crashes for any reason I cannot think of, I want the script to restart. Also, when I restart the computer, the script should run again.
I am not an expert, but for me the best and most understandable approach was to use systemd (assuming you use Linux).
There are two nice examples of how to do this given here and here, showing how to write your .service files in either /etc/systemd/system or /lib/systemd/system. If you want to be completely correct you should take the former:
" /etc/systemd/system/: units installed by the system administrator" 1
The documentation of systemd here is actually nice to read, even if you are not an expert.
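Once the .service file is in place, the usual steps to activate it are (replace myscript.service with the name of your unit):

sudo systemctl daemon-reload
sudo systemctl enable myscript.service    # start it at every boot
sudo systemctl start myscript.service     # start it right now
sudo systemctl status myscript.service    # check that it is running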
Hope this helps someone!