Self Restarting a Python Script - python

I have created a watchdog timer for my script (Python 3), which allows me to halt execution if anything goes wrong (not shown in code below). However, I would like to have the ability to restart the script automatically using only Python (no external scripts). The code needs to be cross platform compatible.
I have tried subprocess and execv (os.execv(sys.executable, ['python'] + sys.argv)), however I am seeing very weird functionality on Windows. I open the command line, and run the script ("python myscript.py"). The script stops but does not exit (verified through Task Manager), and it will not restart itself unless I press enter twice. I would like it to work automatically.
Any suggestions? Thanks for your help!
import threading
import time
import subprocess
import os
import sys

if __name__ == '__main__':
    print("Starting thread list: " + str(threading.enumerate()))
    for _ in range(3):
        time.sleep(1)
        print("Sleeping")

    ''' Attempt 1 with subprocess.Popen '''
    # child = subprocess.Popen(['python', __file__], shell=True)

    ''' Attempt 2 with os.execv '''
    args = sys.argv[:]
    args.insert(0, sys.executable)
    if sys.platform == 'win32':
        args = ['"%s"' % arg for arg in args]
    os.execv(sys.executable, args)
    sys.exit()

Sounds like you are using threading in your original script, which explains why you can't break out of it simply by pressing Ctrl+C. In that case, you might want to add a KeyboardInterrupt handler to your script, like this:
from time import sleep

def interrupt_this():
    try:
        while True:
            sleep(0.02)
    except KeyboardInterrupt:
        # handle all exit procedures and data cleaning
        print("[*] Handling all exit procedures...")
After this, you should be able to automatically restart your relevant procedure (even from within the script itself, without any external scripts). Anyway, it's a bit hard to know without seeing the relevant script, so maybe I can be of more help if you share some of it.
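As a cross-platform workaround (a sketch, not the asker's confirmed fix; `build_restart_command` and `restart_program` are my own names): `os.execv` is the source of the odd console behavior reported on Windows, so one alternative is to spawn a fresh interpreter with `subprocess` and then exit the current process:

```python
import subprocess
import sys

def build_restart_command(argv):
    """Return the command line that relaunches the current script."""
    return [sys.executable] + argv

def restart_program():
    """Relaunch this script as a new process, then exit.

    Spawning a child and exiting sidesteps the os.execv console
    quirks reported on Windows; the new process gets a clean start
    with the same interpreter and arguments.
    """
    subprocess.Popen(build_restart_command(sys.argv))
    sys.exit(0)
```

The trade-off is that the restarted script gets a new process id, so anything tracking the old pid must be updated.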


How to restart a Python script?

In a program I am writing in Python, I need to completely restart the program if a variable becomes true. After looking for a while, I found this command:
while True:
    if reboot == True:
        os.execv(sys.argv[0], sys.argv)
When executed it returns the error [Errno 8] Exec format error. I searched for further documentation on os.execv but didn't find anything relevant, so my question is whether anyone knows what I did wrong, or knows a better way to restart a script (by restarting I mean completely re-running the script, as if it had been opened for the first time, with all variables unassigned and no threads running).
There are multiple ways to achieve the same thing. Start by modifying the program to exit whenever the flag turns True. Then there are various options, each one with its advantages and disadvantages.
Wrap it using a bash script.
The script should handle exits and restart your program. A really basic version could be:
#!/bin/bash
while :
do
    python program.py
    sleep 1
done
Start the program as a sub-process of another program.
Start by wrapping your program's code in a function. Then your __main__ could look like this:
from multiprocessing import Process

def program():
    ### Here is the code of your program
    ...

while True:
    process = Process(target=program)
    process.start()
    process.join()
    print("Restarting...")
This code is relatively basic, and it requires error handling to be implemented.
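The missing error handling might look something like this (a sketch only; `supervise`, `max_restarts`, and the stand-in targets are my own names, not part of the answer): restart only while the child exits abnormally, and give up once a restart budget is spent:

```python
from multiprocessing import Process
import sys

def program():
    # stand-in for the real program body; exit code 0 means "done, don't restart"
    sys.exit(0)

def crasher():
    # stand-in for a failing run
    sys.exit(1)

def supervise(target, max_restarts=5):
    """Re-run `target` in a child process until it exits cleanly,
    giving up once the restart budget is exhausted."""
    restarts = 0
    while True:
        p = Process(target=target)
        p.start()
        p.join()
        if p.exitcode == 0:
            return "clean exit"
        restarts += 1
        if restarts > max_restarts:
            return "gave up after %d restarts" % max_restarts
```

Bounding the restarts prevents a crash loop from spinning forever when the program can never succeed.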
Use a process manager
There are a lot of tools available that can monitor the process, run multiple processes in parallel and automatically restart stopped processes. It's worth having a look at PM2 or similar.
IMHO the third option (process manager) looks like the safest approach. The other approaches will have edge cases and require implementation from your side to handle edge cases.
This has worked for me. Add the shebang at the top of your code and call os.execv() as shown below:
#!/usr/bin/env python3
import os
import sys

if __name__ == '__main__':
    while True:
        reboot = input('Enter:')
        if reboot == '1':
            sys.stdout.flush()
            os.execv(sys.executable, [sys.executable, __file__] + [sys.argv[0]])
        else:
            print('OLD')
I got the same "Exec format error", and I believe it is essentially the same error you get when you type a bare script name at the command prompt and expect it to execute: on Linux it won't work because a path is required, and the execv call runs into the same problem.
You could pass the path of your Python interpreter as the first argument, and that error goes away, except that the name of your script then becomes a parameter and must be added to the argv list. To avoid that, make your script independently executable by adding "#!/usr/bin/python3" to the top of the script AND running chmod 755 on it.
This works for me:
#!/usr/bin/python3
# this script is called foo.py
import os
import sys
import time

if len(sys.argv) >= 2:
    Arg1 = int(sys.argv[1])
else:
    sys.argv.append(None)
    Arg1 = 1

print(f"Arg1: {Arg1}")
sys.argv[1] = str(Arg1 + 1)
time.sleep(3)
os.execv("./foo.py", sys.argv)
Output:
Arg1: 1
Arg1: 2
Arg1: 3
.
.
.

How to stop/terminate a python script from running? (once again)

Context:
I have a running python script. It contains many os.system("./executableNane") calls in a loop.
If I press ctrl + C, it just stops the execution of the current ./executableNane and passes to the next one.
Question:
How to stop the execution of the whole script and not only the execution of the current executable called?
Please note that I have read carefully the question/answer here but even with kill I can kill the executable executableNane but not the whole script (that I cannot find using top).
The only way I have to stop the script (without reboot the system) is to continue to press ctrl + C in a loop as well until all the tests are completed.
You can use subprocess and signal handlers to do this. You can also use subprocess to receive and send information via subprocess.PIPE, which you can read more about in the documentation.
The following should be a basic example of what you are looking to do:
import subprocess
import signal
import sys

def signal_handler(sig, frame):
    print("You pressed Ctrl+C, stopping.")
    print("Signal: {}".format(sig))
    print("Frame: {}".format(frame))
    sys.exit(123)

# Set up signal handler
signal.signal(signal.SIGINT, signal_handler)

print("Starting.")
while True:
    cmd = ['sleep', '10']
    p = subprocess.Popen(cmd)
    p.wait()
    if p.returncode != 0:
        print("Command failed.")
    else:
        print("Command worked.")
The other solution to the question (by #Quillan Kaseman) is much more elegant than the one I had found. All my problems are solved when I press Ctrl + Z instead of Ctrl + C.
Indeed, I have no idea why Z works and C does not. (I will try to look into the details later on.)

Calling bash script as Python subprocess - Bash falls into endless loop getting bad input

I'm using Python 2.7 with Glade 3.15 to create a GUI that allows click-button execution of a variety of existing bash/cshell scripts maintained by my work team. I'm fairly new to Python, but have managed to get the basic application structure up-and-running. However, certain bash scripts I'm calling will step through multiple user prompts and take input to determine end behavior. The problem I am encountering is when I call a bash script as a python subprocess, the bash script appears to take a null input over-and-over, thus causing the prompts to loop endlessly.
For example:
A bash script that prompts:
"Please enter your 4 digit document number:"
** accept user input in terminal **
"You entered ----, is that correct?
1.) Yes
2.) No "
When called from python, the terminal presses through the prompts, sending an empty response. Since the bash script loops until an affirmative response is received, the result is a terminal endlessly printing:
"You entered ----, is that correct?
1.) Yes
2.) No "
I've tried extensively to find answers, here and elsewhere, regarding this issue, but have not found/developed a solution yet.
My basic Python, relative to this problem, is as follows (although I have tried a wide variety of different approaches):
import subprocess
from subprocess import Popen, PIPE

...

# Definition for subprocess calls
def subprocess_cmd(self, command):
    process = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    process.wait()
    (output, err) = process.communicate()
    print output

...

# Script-Call Button
def on_btnScript_clicked(self, object, data=None):
    self.subprocess_cmd("scriptname_is_here")
I just want to call a subprocess from my python button_click event that kicks-off the bash script in the terminal, and waits for keyboard terminal input to walk-through the prompts, as it would if it were run directly from terminal. Sorry so long - wanted to be thorough and explicit. Thanks in advance for any help.
*****UPDATE****
If I call the subprocess from another standalone python file with the .wait() method, the interaction works as desired. But, when I call the subprocess as a result of the GUI button_click event, with the same arguments and methods, the looping anomaly happens. I think this has to do with my button click event and subprocess_cmd 'function' being defined in my mainDialog class, but I don't know how to separate them while retaining my connection to GUI.
Here is more context for my code
#!/usr/bin/python

# Library Imports
from gi.repository import Gtk
from os import system
import subprocess
from subprocess import Popen, PIPE
import time

try:
    import math
except:
    print "Math Library Missing"
    sys.exit(1)

class mainDialog:

    # Build the 'form load' parameters
    def __init__(self):
        self.gladefile = "test.glade"
        self.builder = Gtk.Builder()
        self.builder.add_from_file(self.gladefile)
        self.builder.connect_signals(self)
        self.winMain = self.builder.get_object("winMain")
        self.winCptArg = self.builder.get_object("winCptArg")
        self.winMsbHelp = self.builder.get_object("winMsbHelp")
        self.winCptHelp = self.builder.get_object("winCptHelp")
        self.winAiHelp = self.builder.get_object("winAiHelp")
        self.winMain.move(2625, 400)
        self.winMain.show()

    # Definition for subprocess calls
    def subprocess_cmd(self, command):
        process = subprocess.Popen(command)
        process.wait()

    ...

    # Script-Call Button
    def on_btnScript_clicked(self, object, data=None):
        self.subprocess_cmd("scriptname_is_here")

if __name__ == "__main__":
    main = mainDialog()
    Gtk.main()
Just use os.system:
from os import system

...

# Definition for subprocess calls
def subprocess_cmd(self, command):
    process = system(str(command))

...

# Script-Call Button
def on_btnScript_clicked(self, object, data=None):
    self.subprocess_cmd("echo scriptname_is_here")
The syntax is os.system("executable option parameter").
For example,
os.system("ls -al /home")
Well, if anyone is interested, to achieve what I was intending, I simply left stdin and stdout alone, and applied the .wait() method to the subprocess definition -- but this only works when called from a standalone python script; I haven't been able to retain functionality when connected to GUI button click event.
def subprocess_cmd(self, command):
    process = subprocess.Popen(command).wait()

...

def on_btnScript_clicked(self, object, data=None):
    self.subprocess_cmd("filepath/scriptname_is_here")
stdin and stdout can be left as default, and standard terminal interaction can be achieved, so long as the subprocess definition is appended with the wait() method.
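The observation above can be reduced to a small sketch (`run_interactive` is my own name, not from the answer): passing no stdin/stdout/stderr arguments lets the child inherit the parent's terminal, which is exactly why the prompts start working again:

```python
import subprocess

def run_interactive(command):
    """Run a command that may prompt on the terminal.

    No stdin/stdout/stderr arguments are passed, so the child inherits
    the parent's standard streams and can interact with the user
    directly; returns the child's exit code.
    """
    return subprocess.Popen(command).wait()
```

Conversely, wiring up `stdin=subprocess.PIPE` without ever writing to the pipe is what starves the prompts and causes the endless loop described in the question.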

stop python program when ssh pipe is broken

I'm writing a python script with an infinite while loop that I am running over ssh. I would like the script to terminate when someone kills ssh. For example:
The script (script.py):
while True:
    # do something
Will be run as:
ssh foo ./script.py
When I kill the ssh process, I would like the script on the other end to stop running.
I have tried looking for a closed stdout:
while not sys.stdout.closed:
    # do something
but this didn't work.
How do I achieve this?
Edit:
The remote machine is a Mac which opens the program in a csh:
502 29352 ?? 0:00.01 tcsh -c python test.py
502 29354 ?? 0:00.04 python test.py
I'm opening the ssh process from a python script like so:
p = Popen(['ssh', 'foo', './script.py'], stdout=PIPE)
while True:
    line = p.stdout.readline()
    # etc
EDIT
Proposed Solutions:
Run the script with while os.getppid() != 1
This seems to work on Linux systems, but does not work when the remote machine is running OSX. The problem is that the command is launched in a csh (see above) and so the csh has its parent process id set to 1, but not the script.
Periodically log to stderr
This works, but the script is also run locally, and I don't want to print a heartbeat to stderr.
Run the script in a pseduo tty with ssh -tt.
This does work, but has some weird consequences. Consider the following:
remote_script:
#!/usr/bin/env python
import os
import time
import sys

while True:
    print time.time()
    sys.stdout.flush()
    time.sleep(1)
local_script:
#!/usr/bin/env python
from subprocess import Popen, PIPE
import time

p = Popen(['ssh', '-tt', 'user@foo', 'remote_script'], stdout=PIPE)
while True:
    line = p.stdout.readline().strip()
    if line:
        print line
    else:
        break
    time.sleep(10)
First of all, the output is really weird, it seems to keep adding tabs or something:
[user#local ~]$ local_script
1393608642.7
1393608643.71
1393608644.71
Connection to foo closed.
Second of all, the program does not quit the first time it receives a SIGINT, i.e. I have to hit Ctrl-C twice in order to kill the local_script.
Okay, I have a solution for you.
When the ssh connection closes, the parent process id will change from the pid of the ssh daemon (the fork that handles your connection) to 1.
Thus the following solves your problem:
#!/usr/local/bin/python
from time import sleep
import os

# os.getppid() returns the parent pid
while os.getppid() != 1:
    sleep(1)
Can you confirm this is working on your end too :)
edit
I saw your update.
This is not tested, but to get this idea working on OSX, you may be able to detect whether the parent process of the csh changes. The code below only illustrates the idea and has not been tested. That said, I think it would work, but it would not be the most elegant solution. If a cross-platform solution using signals could be found, it would be preferred.
import os
import sys
import psutil
from time import sleep

def do_stuff():
    sleep(1)

if sys.platform == 'darwin':
    tcsh_pid = os.getppid()
    sshfork_pid = psutil.Process(tcsh_pid).ppid()
    while sshfork_pid == psutil.Process(tcsh_pid).ppid():
        do_stuff()
elif sys.platform == 'linux':
    while os.getppid() != 1:
        sleep(1)
else:
    raise Exception("platform not supported")
Have you tried
ssh -tt foo ./script.py
When the terminal connection is lost, the application is supposed to receive SIGHUP signal, so all you have to do is to register a special handler using signal module.
import signal

def MyHandler(signum, stackFrame):
    errorMessage = "I was stopped by %s" % signum
    raise Exception(errorMessage)

# somewhere in the beginning of __main__:
# registering the handler
signal.signal(signal.SIGHUP, MyHandler)
Note that most likely you'll have to handle some other signals. You can do it in absolutely the same way.
I'd suggest periodically logging to stderr.
This will cause an exception to occur when you no longer have a stderr to write to.
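A minimal sketch of that heartbeat idea (`run_with_heartbeat` and its `stream` parameter are my own additions, introduced so the loop can be exercised without a real ssh pipe): the loop ends as soon as a heartbeat write fails, which is what happens once the pipe backing stderr disappears:

```python
import sys
import time

def run_with_heartbeat(do_work, stream=None, interval=0.0):
    """Run do_work in a loop, writing a heartbeat after each pass.

    When the stream (normally stderr, carried over the ssh pipe) can
    no longer be written, the loop stops; returns the number of
    heartbeats that succeeded.
    """
    stream = stream if stream is not None else sys.stderr
    beats = 0
    while True:
        do_work()
        try:
            stream.write(".")
            stream.flush()
            beats += 1
        except (OSError, ValueError):
            return beats
        time.sleep(interval)
```

Note the drawback the poster mentions: when the script also runs locally, the heartbeat dots land on the user's terminal unless they are redirected.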
The running script is a child process of the terminal session. If you close the SSH session properly, it will terminate the process. But another approach is to tie your while loop to some other factor and disconnect it from your SSH session.
You can have your script run regularly via cron, give the while loop a counter, or put a sleep command in the loop to control execution. Pretty much anything other than tying it to your SSH session is valid.
To do this you could use exec & to disconnect the instances from your session.

Efficient Python Daemon

I was curious how you can run a python script in the background, repeating a task every 60 seconds. I know you can put something in the background using &; is that effective for this case?
I was thinking of doing a loop, having it wait 60s and loading it again, but something feels off about that.
Rather than writing your own daemon, use python-daemon instead! python-daemon implements the well-behaved daemon specification of PEP 3143, "Standard daemon process library".
I have included example code based on the accepted answer to this question, and even though the code looks almost identical, it has an important fundamental difference. Without python-daemon you would have to use & to put your process in the background and nohup to keep your process from getting killed when you exit your shell. Instead, this will automatically detach from your terminal when you run the program.
For example:
import daemon
import time

def do_something():
    while True:
        with open("/tmp/current_time.txt", "w") as f:
            f.write("The time is now " + time.ctime())
        time.sleep(5)

def run():
    with daemon.DaemonContext():
        do_something()

if __name__ == "__main__":
    run()
To actually run it:
python background_test.py
And note the absence of & here.
Also, this other stackoverflow answer explains in detail the many benefits of using python-daemon.
I think your idea is pretty much exactly what you want. For example:
import time

def do_something():
    with open("/tmp/current_time.txt", "w") as f:
        f.write("The time is now " + time.ctime())

def run():
    while True:
        time.sleep(60)
        do_something()

if __name__ == "__main__":
    run()
The call to time.sleep(60) will put your program to sleep for 60 seconds. When that time is up, the OS will wake up your program and run the do_something() function, then put it back to sleep. While your program is sleeping, it is doing nothing very efficiently. This is a general pattern for writing background services.
To actually run this from the command line, you can use &:
$ python background_test.py &
When doing this, any output from the script will go to the same terminal as the one you started it from. You can redirect output to avoid this:
$ python background_test.py >stdout.txt 2>stderr.txt &
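One refinement worth noting (an optional sketch; `run_every` is my name, not part of the answer): plain time.sleep(60) lets the schedule drift by however long do_something() takes each cycle, so scheduling against a monotonic clock keeps the cadence steady:

```python
import time

def run_every(period, do_something, iterations=None):
    """Call do_something every `period` seconds without cumulative drift.

    Each deadline is computed from the previous one rather than from
    "now", so time spent inside do_something() does not push later
    runs back. `iterations` bounds the loop for testing; None means
    run forever, matching the original service pattern.
    """
    next_tick = time.monotonic()
    done = 0
    while iterations is None or done < iterations:
        do_something()
        done += 1
        next_tick += period
        time.sleep(max(0.0, next_tick - time.monotonic()))
    return done
```

For a task as loose as "roughly once a minute" the drift rarely matters, but the pattern costs nothing extra.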
Using & in the shell is probably the simplest way, as Greg described.
If you really want to create a powerful daemon, though, you will need to look into the os.fork() function.
The example from Wikipedia:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os, time

def createDaemon():
    """
    This function creates a service/daemon that will execute a given task
    """
    try:
        # Store the fork PID
        pid = os.fork()
        if pid > 0:
            print 'PID: %d' % pid
            os._exit(0)
    except OSError, error:
        print 'Unable to fork. Error: %d (%s)' % (error.errno, error.strerror)
        os._exit(1)
    doTask()

def doTask():
    """
    This function runs the task that will be daemonized
    """
    # Open the file in write mode
    file = open('/tmp/tarefa.log', 'w')
    # Write forever
    while True:
        print >> file, time.ctime()
        file.flush()
        time.sleep(2)
    # Close the file (never reached)
    file.close()

if __name__ == '__main__':
    # Create the daemon
    createDaemon()
And then you could put whatever task you needed inside the doTask() block.
You wouldn't need to launch this using &, and it would allow you to customize the execution a little further.
