Is there any library in Python 3 to relaunch a script? - python

I have a script in Python that does some work. I want to re-run this script automatically, and I also want to relaunch it after any crash or freeze.
I can do something like this:
import os
import sys

while True:
    try:
        main()
    except Exception:
        os.execv(sys.executable, ['python'] + sys.argv)
But, for some unknown reason, this still crashes or freezes once every few days. When I see the crash, I type "python main.py" in cmd and it starts again, so I don't know why os.execv doesn't do this job by itself. I guess it's because this code is part of the app itself. So I would prefer a script/app that controls the relaunch externally; I hope that will be more stable.
So this script should work in this way:
Start any script
Check that the process of this script is still working, for example by checking that some file's modification time changes, and track the process by name/ID/etc.
When it disappears from the process list, launch it again
When the file was changed more than 5 minutes ago, stop the process, wait a few seconds, launch it again.
In general: be cross-platform (Linux/Windows)
Not important: log all crashes.
I can do this myself (I'm working on it right now), but I'm pretty sure something like this must already have been done by somebody; I just can't find it on Google/GitHub.
UPDATE: I added the code from hansaplast's answer to GitHub, along with some changes to it: relauncher. Feel free to copy/use it.

As it needs to work both on Windows and on Linux, I don't know a way to do that with standard tools, so here's a DIY solution:
from subprocess import Popen
import os
import time

# change into the script's directory
abspath = os.path.abspath(__file__)
dname = os.path.dirname(abspath)
os.chdir(dname)

while True:
    p = Popen(['python', 'my_script.py', 'arg1', 'arg2'])
    time.sleep(20)  # give the program some time to write into the logfile
    while True:
        if p.poll() is not None:
            print('crashed or regularly terminated')
            break
        file_age_in_s = time.time() - os.path.getmtime('output.log')
        if file_age_in_s > 60:
            print('frozen, killing process')
            p.kill()
            break
        time.sleep(1)
    print('restarting..')
Explanation:
time.sleep(20): give the script 20 seconds to write into the log file
poll(): regularly check whether the script died (either crashed or terminated normally; you can check the return value of poll() to tell the two apart)
getmtime(): regularly check output.log and see whether it was changed within the past 60 seconds
time.sleep(1): wait 1 second between checks, as otherwise it would eat up too many system resources
The script assumes that the watchdog script and the monitored script are in the same directory. If that is not the case, change the lines beneath "change into the script's directory".
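For this to work, the monitored script has to update output.log regularly. As a rough sketch (not part of the original answer; the heartbeat interval and the do_work name are made up), my_script.py could look like this:

# my_script.py (hypothetical): it only needs to touch output.log regularly
# so the watchdog's getmtime() check sees that it is still alive.
import time

def do_work():
    # placeholder for the script's real work
    time.sleep(2)

if __name__ == '__main__':
    while True:
        do_work()
        # heartbeat: append a line so output.log's mtime is updated
        with open('output.log', 'a') as f:
            f.write('alive at %s\n' % time.ctime())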

I personally like the supervisor daemon, but it has two issues here:
It is only for Unix systems
It restarts the app only on crashes, not on freezes.
But it has a simple XML-RPC API, which makes it easier to write a freeze-watchdog app yourself. You could just start your process under supervisor and restart it via the supervisor API when you see it freeze (see the sketch after the config below).
You could install it via apt install supervisor on Ubuntu and write a config like this:
[program:main]
user=vladimir
command=python3 /var/local/main/main.py
process_name=%(program_name)s
directory=/var/local/main
autostart=true
autorestart=true
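A freeze watchdog on top of that API might look roughly like this. This is only a sketch: it assumes you have enabled the [inet_http_server] section of supervisord.conf on port 9001, and that main.py regularly touches a heartbeat file (/var/local/main/output.log is an assumption, not something supervisor provides):

import os
import time
import xmlrpc.client

SUPERVISOR_URL = 'http://localhost:9001/RPC2'   # requires [inet_http_server] in supervisord.conf
HEARTBEAT_FILE = '/var/local/main/output.log'   # hypothetical file touched by main.py
PROGRAM = 'main'                                # matches [program:main] above

server = xmlrpc.client.ServerProxy(SUPERVISOR_URL)

while True:
    age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
    if age > 300:  # no heartbeat for 5 minutes: treat the program as frozen
        server.supervisor.stopProcess(PROGRAM)
        server.supervisor.startProcess(PROGRAM)
    time.sleep(30)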

Related

Issue with script that automatically runs at particular time every day

I want a Python file to run automatically at 8 am every day going forward. I tried to use the schedule library, as pointed out in the second answer here, on Windows.
import schedule
import time

def query_fun(t):
    print('Do a bunch of things')
    print("I'm working...", t)
    df.to_csv('C:/Documents/Query_output.csv', encoding='utf-8')

schedule.every().day.at("08:00").do(query_fun, 'It is 08:00')

while True:
    schedule.run_pending()
    time.sleep(60) # wait one minute
But 8am has come and gone, and the csv file hasn't been updated, and it doesn't look like the script runs when I want it to.
Edit: Based on this, I used pythonw.exe to run the script from the command line: C:\Program Files\Python3.7>pythonw.exe daily_query.py, but the script still doesn't run when expected.
You took out the key part of the script by commenting it out. How is the script supposed to magically wake up at 8 AM and do something? The point is to keep it running at all times and let the schedule library trigger the job at the right time (by running any pending jobs). What you are doing right now is just declaring the job and exiting without running anything.
The point is to keep the script running in the background; the function is triggered when the current time matches the time you specified and the pending job is run. Start your script in the background and forget about it until 8 AM:
nohup python MyScheduledProgram.py &
nohup makes sure that your terminal does not get any output printed on it from the program. You can view the output in nohup.out, though.
Here you can easily see what the script does:
schedule.every().day.at("08:00").do(query_fun, 'It is 08:00')
tells the scheduler to run the function when it is 8 am.
But the other part of the library is this one:
while True:
    schedule.run_pending()
    time.sleep(60) # wait one minute
This part checks whether it should start a job, then waits for 60 seconds and checks again.
EDIT:
The original question is about a Windows machine, so the crontab suggestion below does not apply there.
If you are on a linux machine, you should consider using crontabs:
Open a terminal and type
crontab -e
After you have selected the editor you want (let's take nano), it opens a list where you can add various entries.
just add:
0 8 * * * /usr/bin/python3 /home/path/to/skript.py
Then save with Ctrl + O and exit nano with Ctrl + X.
The script will run every day at 8 am; just test the command
/usr/bin/python3 /home/path/to/skript.py
to make sure the script does not produce an error.

Can't kill a running subprocess using Python on Windows

I have a Python script that runs all day long, checking the time every 60 seconds so it can start/end tasks (other Python scripts) at specific periods of the day.
This script runs almost entirely OK. Tasks start at the right time and are opened in a new cmd window, so the main script can keep running and sampling the time. The only problem is that it just won't kill the tasks.
import os
import time
import signal
import subprocess
import ctypes

freq = 60 # sampling frequency in seconds

while True:
    print 'Sampling time...'
    now = int(time.time())

    # initialize the task.. lets say 8:30am
    if (time.strftime("%H:%M", time.localtime(now)) == '08:30'):
        # The following method is used so python opens another cmd window
        # and keeps the original script running and sampling time
        pro = subprocess.Popen(["start", "cmd", "/k", "python python-task.py"], shell=True)

    # kill process attempts.. lets say 11:40am
    if (time.strftime("%H:%M", time.localtime(now)) == '11:40'):
        pro.kill() # not working - nothing happens
        pro.terminate() # not working - nothing happens
        os.kill(pro.pid, signal.SIGINT) # not working - windows error 5 access denied
        # Kill the process using ctypes - not working - nothing happens
        ctypes.windll.kernel32.TerminateProcess(int(pro._handle), -1)
        # Kill process using windows taskkill - nothing happens
        os.popen('TASKKILL /PID ' + str(pro.pid) + ' /F')

    time.sleep(freq)
Important note: the task script python-task.py will run indefinitely. That's exactly why I need to be able to "force" kill it at a certain time while it is still running.
Any clues? What am I doing wrong? How to kill it?
You're killing the shell that spawns your sub-process, not your sub-process.
Edit: From the documentation:
The only time you need to specify shell=True on Windows is when the command you wish to execute is built into the shell (e.g. dir or copy). You do not need shell=True to run a batch file or console-based executable.
Warning
Passing shell=True can be a security hazard if combined with untrusted input. See the warning under Frequently Used Arguments for details.
So, instead of passing a single string, pass each argument separately in the list, and eschew using the shell. You probably want to use the same executable for the child as for the parent, so it's usually something like:
pro = subprocess.Popen([sys.executable, "python-task.py"])
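Putting that together, a minimal sketch of the time-window loop without the shell wrapper could look like this (python-task.py and the times are taken from the question; note that it no longer opens a separate cmd window, which is the trade-off of dropping shell=True -- on Windows you could pass creationflags=subprocess.CREATE_NEW_CONSOLE if you still want one):

import sys
import time
import subprocess

pro = None

while True:
    now = time.strftime("%H:%M")
    if now == '08:30' and pro is None:
        # No shell, no "start cmd" wrapper: Popen returns a handle to the
        # actual Python child, so terminate()/kill() act on the right process.
        pro = subprocess.Popen([sys.executable, "python-task.py"])
    if now == '11:40' and pro is not None:
        pro.terminate()   # on Windows this calls TerminateProcess on the child
        pro.wait()
        pro = None
    time.sleep(60)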

subprocess can't successfully restart the targeted python file

I wrote a program my_test.py to get data from the web and store it in MySQL.
But my_test.py crashes a lot (my bad programming skills...), so I try to monitor its status and restart it when it crashes.
I use the subprocess module with the following code.
import subprocess
import time

p = subprocess.Popen(['python.exe', r'D:\my_test.py'], shell=True)

while True:
    try:
        stopped = p.poll()
    except:
        stopped = True
    if stopped:
        p = subprocess.Popen(['python.exe', r'D:\my_test.py'], shell=True)
    time.sleep(60)
But when my_test.py crashes, a Windows warning dialog pops up to alert me that my_test.py is down and to ask which action I want to take: stop, debug, and so on.
Something like that.
my_test.py seems to be frozen by that alert dialog, and the code above can't restart it successfully.
Only when I manually close the dialog by choosing 'close' will it restart again.
Is there any solution to this problem, so that my code can successfully restart my_test.py when it breaks down?
Sorry for the inconvenience caused by my poor English, and thanks in advance for your kind advice.
There are two parts to your question:
what to do with the debug dialog. You could try this: How do I disable the 'Debug / Close Application' dialog on Windows Vista? (a sketch of that approach follows this list)
how to restart the script automatically
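One approach from that linked question, sketched here as an assumption rather than a tested recipe: have the watchdog process suppress the Windows crash dialog via SetErrorMode before spawning the child, since the error mode is inherited by child processes. Whether the dialog really disappears can also depend on the machine's Windows Error Reporting settings.

import sys
import ctypes
import subprocess

# Windows-only: suppress the "my_test.py has stopped working" dialog.
# The error mode set here is inherited by processes spawned afterwards.
SEM_FAILCRITICALERRORS = 0x0001
SEM_NOGPFAULTERRORBOX = 0x0002
if sys.platform == 'win32':
    ctypes.windll.kernel32.SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOGPFAULTERRORBOX)

p = subprocess.Popen([sys.executable, r'D:\my_test.py'])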
How to restart
The priority order:
fix my_test.py, to avoid crashing due to known issues
use a supervisor program to run your script such as upstart or supervisord -- they can restart it automatically if it crashes
write your own supervisor program with its own bugs that you have to maintain
It is best to limit yourself to options 1 and/or 2 if you can find an already-written supervisor program that works on Windows (upstart and supervisord do not work on Windows).
Your current supervisor script could be improved so that, if the program has already been running for more than a minute when it crashes, it is restarted immediately instead of waiting out another minute:
#!/usr/bin/env python3
import sys
import subprocess
import time

try:
    from time import monotonic as timer
except ImportError:
    from time import time as timer # time() can be set back

while True:
    earliest_next_start = timer() + 60
    subprocess.call([sys.executable, r'D:\my_test.py'])
    while timer() < earliest_next_start:
        time.sleep(max(0, earliest_next_start - timer()))

BaseHTTPServer socket close after python script exit

I already searched for solutions to my question and found some, but they either don't work for me or are very complicated for what I want to achieve.
I have a Python (2.7) script that creates 3 BaseHTTPServers using threads. I now want the script to be able to close itself and restart. For this, I create an extra file called "restart_script" with this content:
sleep 2
python2 myScript.py
I then start this script and, after that, close my own Python script:
os.system("nohup bash restart_script & ")
exit()
This works quite well: the Python script closes and the new one pops up 2 seconds later, but the BaseHTTPServers do not come up; they report that the address is already in use (socket.error, Errno 98).
I initiate the server with:
httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
Then I let it serve forever:
thread.start_new_thread(httpd.serve_forever, tuple())
I alternatively tried this:
httpd_thread = threading.Thread(target=httpd.serve_forever)
httpd_thread.daemon = True
httpd_thread.start()
But this has the same result.
If I kill the script using Ctrl+C and then start it again right after that, everything works fine. I think that when the script restarts itself, the old process is still somehow active, and I need to somehow disown it so that the sockets can be cleared.
I am running on Linux (Xubuntu).
How can I really really kill my own script and then bring it up again seconds later so that all sockets are closed?
I found an answer to my specific problem.
I just use another script that starts my main program using os.system(). When the main script wants to restart, it simply exits normally and the helper script starts it again, over and over...
If I want to actually stop my script for good, it creates a file, and the helper script checks whether this file exists (a sketch of the main-script side is shown after the helper script below).
The restart-helper-script looks like this:
import os, time

cwd = os.getcwd()

# first start --> remove shutdown file:
try:
    os.remove(os.path.join(cwd, "shutdown"))
except:
    pass

while True:
    # check if shutdown was requested:
    if os.path.exists(os.path.join(cwd, "shutdown")):
        break
    # else start the script:
    os.system("python2 myMainScript.py")
    # after it is done, wait 2 seconds (just to make sure sockets are closed.. might be optional)
    time.sleep(2)
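For completeness, the main-script side of this convention could look roughly like the sketch below. This is not code from the original answer; the file name "shutdown" follows the helper script above, and the function names are made up:

import os
import sys

def shutdown_for_good():
    # Create the marker file the helper script checks for, then exit.
    # Without this file the helper will simply relaunch us.
    open(os.path.join(os.getcwd(), "shutdown"), "w").close()
    sys.exit(0)

def restart():
    # A plain exit is enough to be restarted: the helper loops and
    # calls os.system("python2 myMainScript.py") again.
    sys.exit(0)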

How to know if a running script dies?

So I'm somewhat new to programming and mostly self-taught, so sorry if this question is a bit on the novice side.
I have a Python script that runs over long periods (e.g. it downloads pages every few seconds for days at a time). It's sort of a monitoring script for a web app.
Every so often, something will disrupt it, and it'll need to be restarted. I've gotten these events down to a bare minimum, but it still happens every few days, and when the script does get killed it could be bad news if I don't notice for a few hours.
Right now it's running in a screen session on a VPS.
Could someone point me in the right direction as far as knowing when the script dies and having it automatically restart?
Would this be something to write in Bash? Or something else? I've never done anything like it before and don't know where to start or even look for information.
You could try supervisord, it's a tool for controlling daemon processes.
You should daemonize your program.
As described in Efficient Python Daemon, you can install and use python-daemon, which implements the well-behaved daemon specification of PEP 3143, "Standard daemon process library".
Create a file mydaemon.py with contents like this:
#!/usr/bin/env python
import daemon
import time
import logging

def do_something():
    name = 'mydaemon'
    logger = logging.getLogger(name)
    handler = logging.FileHandler('/tmp/%s.log' % (name))
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    handler.setFormatter(formatter)
    logger.addHandler(handler)
    logger.setLevel(logging.WARNING)
    while True:
        try:
            time.sleep(5)
            # opening a non-existent file read-only and then writing to it
            # deliberately raises an error, so something gets logged
            with open("/tmp/file-does-not-exist", "r") as f:
                f.write("The time is now " + time.ctime())
        except Exception as ex:
            logger.error(ex)

def run():
    with daemon.DaemonContext():
        do_something()

if __name__ == "__main__":
    run()
To actually run it use:
python mydaemon.py
This will spawn do_something() within the DaemonContext, and then the mydaemon.py script itself will exit. You can see the running daemon with pgrep -fl mydaemon.py. This short example will simply log errors to a log file at /tmp/mydaemon.log. You'll need to kill the daemon manually or it will run indefinitely.
To run your own program, just replace the contents of the try block with a call to your code.
I believe a wrapper bash script that executes the python script inside a loop should do the trick.
while true; do
    # Execute python script here
    echo "Web app monitoring script disrupted ... Restarting script."
done
Hope this helps.
That depends on the kind of failure you want to guard against. If it's just the script crashing, the simplest thing to do would be to wrap your main function in a try/except:
import logging as log

while True:
    try:
        main()
    except:
        log.exception("main() crashed")
If something is killing the Python process, it might be simplest to run it in a shell loop:
while sleep 1; do python checker.py; done
And if it's crashing because the machine is going down… well… Quis custodiet ipsos custodes?
However, to answer your question directly: the absolute simplest way to check if it's running from the shell would be to grep the output of ps:
ps | grep "python checker.py" > /dev/null 2>&1
running=$?
Of course, this isn't fool-proof, but it's generally Good Enough.
