How to use subprocess to terminate a program which is started at boot?
I ran across this question and found wordsforthewise's answer and tried it but nothing happens.
Wordsforthewise's answer:
import subprocess as sp
extProc = sp.Popen(['python','myPyScript.py']) # runs myPyScript.py
status = extProc.poll() # status should be None while the process is running
extProc.terminate() # closes the process
status = extProc.poll() # status should now be something other than None (1 in my testing)
I have a program, /home/pi/Desktop/startUpPrograms/facedetection.py, run at boot by a cron job, and I want to kill it from a Flask app route like this.
Would assigning the program name, as in extProc = program_name, work? If so, how do I assign it?
@app.route("/killFD", methods=['GET', 'POST'])
def killFaceDetector():
    # kill code goes here
Since you say the program is run by cronjob, you will have no handle to the program's PID in Python.
You'll have to iterate over all processes to find the one(s) to kill... or more succinctly, just use the pkill utility, with the -f flag to have it look at the full command line. The following will kill all processes (if your user has the permission to do so) that have facedetection.py in the command line.
import os
os.system('pkill -f facedetection.py')
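Dropped into the Flask route from the question, that could look like the sketch below (the helper function is an assumption, not part of the original answer; pkill exits 0 if at least one process matched, 1 if none did):

```python
import subprocess

def kill_face_detector():
    # pkill -f matches against the full command line;
    # returns 0 if at least one process matched, 1 if none did
    return subprocess.call(['pkill', '-f', 'facedetection.py'])
```

Inside killFaceDetector() you would call this helper and build the HTTP response from its return code.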
Related
I'm building a simple Python GUI, and on a button click it runs a simple command:
os.system("C:/cygwin64/bin/bash.exe")
When I look in the console it ran correctly, but my GUI freezes and stops responding.
If I run the command in the console without Python, it works perfectly and the Cygwin terminal starts.
If you know Cygwin: is there a better way to start it in the same terminal?
os.system blocks the current thread; you can use os.popen to run the command without blocking, and it also gives you a few methods to detach from, read from, and write to that process.
for example,
import os
a = os.popen("python -c 'while True: print(1)'")
will create a new process that will be terminated as soon as you terminate your script.
You can iterate over the output, for example:
for i in a:
    print(i)
and it will block the thread just as os.system does.
You can call a.detach() whenever you want to detach from the process.
By contrast, with os.system:
import os
os.system("python -c 'while True: print(1)'")
the call blocks, printing 1s forever until you terminate the script.
You can use the Popen function from the subprocess module. It has many possible arguments that allow you to pipe input to and/or pipe output from the program you are running. But if you just want to execute bash.exe while allowing your original Python program to continue running, and eventually wait for the completion of bash.exe, then:
import subprocess
# pass a list of command-line arguments:
p = subprocess.Popen(["C:/cygwin64/bin/bash.exe"])
... # continue executing
# wait for the subprocess (bash.exe) to end:
exit_code = p.wait()
I have some script in Python, which does some work. I want to re-run this script automatically. Also, I want to relaunch it on any crashes/freezes.
I can do something like this:
while True:
    try:
        main()
    except Exception:
        os.execv(sys.executable, ['python'] + sys.argv)
But, for an unknown reason, this still crashes or freezes once every few days. When I see the crash, I type "python main.py" in cmd and it starts, so I don't know why os.execv doesn't do this work by itself. I guess it's because this code is part of the app itself. So I'd prefer some script/app that controls the relaunch externally; I hope that will be more stable.
So this script should work in this way:
Start any script
Check that the process of this script is working, for example by watching some file's change time, and track it by process name/ID/etc.
When it disappears from the process list, launch it again
When the file was last changed more than 5 minutes ago, stop the process, wait a few seconds, and launch it again
In general: be cross-platform (Linux/Windows)
(Not important) log all crashes
I can do this myself (I'm working on it right now), but I'm pretty sure something like this must already have been done by somebody; I just can't find it on Google/GitHub.
UPDATE: I added the code from @hansaplast's answer to GitHub, with some changes: relauncher. Feel free to copy/use it.
As it needs to work both on Windows and on Linux, I don't know a way to do that with standard tools, so here's a DIY solution:
from subprocess import Popen
import os
import time
# change into scripts directory
abspath = os.path.abspath(__file__)
dname = os.path.dirname(abspath)
os.chdir(dname)
while True:
    p = Popen(['python', 'my_script.py', 'arg1', 'arg2'])
    time.sleep(20)  # give the program some time to write into the logfile
    while True:
        if p.poll() is not None:
            print('crashed or regularly terminated')
            break
        file_age_in_s = time.time() - os.path.getmtime('output.log')
        if file_age_in_s > 60:
            print('frozen, killing process')
            p.kill()
            break
        time.sleep(1)
    print('restarting..')
print('restarting..')
Explanation:
time.sleep(20): give script 20 seconds to write into the log file
poll(): regularly check if script died (either crashed or regularly terminated, you can check the return value of poll() to differentiate that)
getmtime(): regularly check output.log and check if that was changed the past 60 seconds
time.sleep(1): between every check wait for 1s as otherwise it would eat up too many system resources
The script assumes that the check-script and the run-script are in the same directory. If that is not the case, change the lines beneath "change into scripts directory"
I personally like supervisor daemon, but it has two issues here:
It is only for unix systems
It restarts app only on crashes, not freezes.
But it has a simple XML-RPC API, which makes the job of writing a freeze-watchdog app simpler. You could just start your process under supervisor and restart it via the supervisor API when you see it freeze.
You could install it via apt install supervisor on ubuntu and write config like this:
[program:main]
user=vladimir
command=python3 /var/local/main/main.py
process_name=%(program_name)s
directory=/var/local/main
autostart=true
autorestart=true
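The XML-RPC API mentioned above is reachable from the standard library. A minimal sketch, assuming supervisord's [inet_http_server] section is enabled on port 9001 and reusing the program name main from the config above (the freeze check itself is a hypothetical placeholder):

```python
import os
import time
from xmlrpc.client import ServerProxy

def looks_frozen(logfile='output.log', max_age=300):
    # hypothetical freeze check: log file untouched for 5 minutes
    return time.time() - os.path.getmtime(logfile) > max_age

def restart_if_frozen(program='main', url='http://localhost:9001/RPC2'):
    # requires [inet_http_server] to be enabled in supervisord.conf
    server = ServerProxy(url)
    if looks_frozen():
        server.supervisor.stopProcess(program)
        server.supervisor.startProcess(program)
```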
I've got a long running python script that I want to be able to end from another python script. Ideally what I'm looking for is some way of setting a process ID to the first script and being able to see if it is running or not via that ID from the second. Additionally, I'd like to be able to terminate that long running process.
Any cool shortcuts exist to make this happen?
Also, I'm working in a Windows environment.
I just recently found an alternative answer here: Check to see if python script is running
You could get your own PID (Process Identifier) through
import os
os.getpid()
and to kill a process in Unix
import os, signal
os.kill(5383, signal.SIGKILL)
to kill on Windows, use
import subprocess as s
def killProcess(pid):
    s.Popen('taskkill /F /PID {0}'.format(pid), shell=True)
You can send the PID to the other program, or you could search the process list to find the name of the other script and kill it with the above function.
I hope that helps you.
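One simple way to "send the PID to the other program" is a PID file. A sketch, with pidfile.txt as an arbitrary shared path (not from the original answer):

```python
import os

PID_FILE = 'pidfile.txt'  # arbitrary path known to both scripts

# in the long-running script: record our PID at startup
with open(PID_FILE, 'w') as f:
    f.write(str(os.getpid()))

# in the controlling script: read the PID back and act on it,
# e.g. pass it to the taskkill-based killProcess() above on Windows
with open(PID_FILE) as f:
    pid = int(f.read())
```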
You're looking for the subprocess module.
import subprocess as sp
extProc = sp.Popen(['python','myPyScript.py']) # runs myPyScript.py
status = extProc.poll() # status should be None while the process is running
extProc.terminate() # closes the process
status = extProc.poll() # status should now be something other than None (1 in my testing)
subprocess.Popen starts the external python script, equivalent to typing 'python myPyScript.py' in a console or terminal.
The status from poll() will be None if the process is still running, and (for me) 1 if it was closed from within this script. I'm not sure what the status is if it was closed another way.
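A self-contained way to see those status values, using sys.executable so the same interpreter is reused (the sleeping child stands in for myPyScript.py):

```python
import subprocess
import sys

# start a child that would otherwise run for 60 seconds
proc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])

running = proc.poll()   # None while the child is alive
proc.terminate()
proc.wait()             # reap the child so poll() reports the exit status
ended = proc.poll()     # -15 (SIGTERM) on Unix, 1 on Windows
```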
This worked for me under Windows 11 and PyQt5:
proc = subprocess.Popen('python3 MySecondApp.py')
...
proc.terminate()
where MyFirstApp.py is the caller script (still running) and MySecondApp.py is the script it launched.
I have written a Python script that runs infinitely using loops. The script is started inside a screen session. However sometimes, after a few hours or even days, it breaks down for a reason I don't know, because the screen session closes when that happens.
I have also created a "watchdog" script with the following code, which also runs inside a screen session:
from subprocess import check_output
import os
import time
import random

time.sleep(20)

def screen_present(name):
    try:
        var = check_output(["screen -ls; true"], shell=True).decode()
        if "." + name + "\t(" in var:
            print(name + " is running")
        else:
            print(name + " is not running")
            print("RESTARTING")
            os.system("screen -dmS player python /var/www/updater.py > /dev/null 2> /dev/null & echo $")
    except Exception:
        return True

while True:
    screen_present("updater")
    time.sleep(random.uniform(6, 10))
So when I check my scripts after leaving them running for a night or so, I sometimes find that:
the screen session with the original code is not there anymore, because my script must have thrown an exception, but I can't find out which one in order to fix it
the screen session of my watchdog is marked as "dead"
What would you guys do to find the error and guarantee stable running?
When you start your python process, make it output to a file. Like this:
python myfile.py >> log.txt 2>&1
You will be able to access that file even after it dies.
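If the process is launched from Python rather than a shell, the same redirection can be done with Popen's stdout/stderr arguments; here a tiny child command stands in for myfile.py:

```python
import subprocess
import sys

# append both streams to log.txt, like `>> log.txt 2>&1` in the shell
with open('log.txt', 'a') as log:
    proc = subprocess.Popen(
        [sys.executable, '-c', 'print("hello from child")'],
        stdout=log,
        stderr=subprocess.STDOUT,
    )
    proc.wait()
```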
I'm writing a python script with an infinite while loop that I am running over ssh. I would like the script to terminate when someone kills ssh. For example:
The script (script.py):
while True:
# do something
Will be run as:
ssh foo ./script.py
When I kill the ssh process, I would like the script on the other end to stop running.
I have tried looking for a closed stdout:
while not sys.stdout.closed:
# do something
but this didn't work.
How do I achieve this?
Edit:
The remote machine is a Mac which opens the program in a csh:
502 29352 ?? 0:00.01 tcsh -c python test.py
502 29354 ?? 0:00.04 python test.py
I'm opening the ssh process from a python script like so:
p = Popen(['ssh', 'foo', './script.py'], stdout=PIPE)
while True:
    line = p.stdout.readline()
    # etc
EDIT
Proposed Solutions:
Run the script with while os.getppid() != 1
This seems to work on Linux systems, but does not work when the remote machine is running OS X. The problem is that the command is launched in a csh (see above), so the csh gets its parent process id set to 1, but the script does not.
Periodically log to stderr
This works, but the script is also run locally, and I don't want to print a heartbeat to stderr.
Run the script in a pseudo-tty with ssh -tt.
This does work, but has some weird consequences. Consider the following:
remote_script:
#!/usr/bin/env python
import os
import time
import sys
while True:
    print(time.time())
    sys.stdout.flush()
    time.sleep(1)
local_script:
#!/usr/bin/env python
from subprocess import Popen, PIPE
import time
p = Popen(['ssh', '-tt', 'user@foo', 'remote_script'], stdout=PIPE)
while True:
    line = p.stdout.readline().strip()
    if line:
        print(line)
    else:
        break
    time.sleep(10)
First of all, the output is really weird, it seems to keep adding tabs or something:
[user#local ~]$ local_script
1393608642.7
1393608643.71
1393608644.71
Connection to foo closed.
Second of all, the program does not quit the first time it receives a SIGINT, i.e. I have to hit Ctrl-C twice in order to kill the local_script.
Okay, I have a solution for you.
When the ssh connection closes, the parent process id will change from the pid of the ssh daemon (the fork that handles your connection) to 1.
Thus the following solves your problem.
#!/usr/local/bin/python
from time import sleep
import os

# os.getppid() returns the parent pid
while os.getppid() != 1:
    sleep(1)
Can you confirm this is working on your end too? :)
edit
I saw you update.
I saw your update. This is not tested, but to get this idea working on OS X, you may be able to detect whether the parent process of the csh changes. The code below only illustrates the idea and has not been tested. That said, I think it would work, but it would not be the most elegant solution. If a cross-platform solution using signals could be found, it would be preferred.
import os
import sys
from time import sleep

import psutil  # third-party; ppid is a method in current psutil

def do_stuff():
    sleep(1)

if sys.platform == 'darwin':
    tcsh_pid = os.getppid()
    sshfork_pid = psutil.Process(tcsh_pid).ppid()
    while sshfork_pid == psutil.Process(tcsh_pid).ppid():
        do_stuff()
elif sys.platform == 'linux':
    while os.getppid() != 1:
        sleep(1)
else:
    raise Exception("platform not supported")
Have you tried
ssh -tt foo ./script.py
When the terminal connection is lost, the application is supposed to receive the SIGHUP signal, so all you have to do is register a handler for it using the signal module.
import signal
def MyHandler(signum, stackFrame):
    errorMessage = "I was stopped by signal %s" % signum
    raise Exception(errorMessage)

# somewhere at the beginning of __main__:
# register the handler
signal.signal(signal.SIGHUP, MyHandler)
Note that most likely you'll have to handle some other signals. You can do it in absolutely the same way.
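Handling the other signals "in absolutely the same way" can be as simple as registering one handler for several of them; SIGHUP only exists on POSIX, so it is guarded here:

```python
import signal

def my_handler(signum, frame):
    raise SystemExit("stopped by signal %s" % signum)

signals = [signal.SIGTERM]
if hasattr(signal, 'SIGHUP'):   # SIGHUP is POSIX-only
    signals.append(signal.SIGHUP)

# register the same handler for every signal in the list
for sig in signals:
    signal.signal(sig, my_handler)
```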
I'd suggest periodically logging to stderr.
This will cause an exception to occur when you no longer have a stderr to write to.
The running script is a child process of the terminal session. If you close the SSH session properly, it will terminate the process. But another way of going about this is to tie your while loop to another factor and disconnect it from your SSH session.
You can have your script controlled by cron to execute regularly. You can have the while loop have a counter. You can have a sleep command in the loop to control execution. Pretty much anything other than having it connected to your SSH session is valid.
To do this you could use exec & to disconnect instances from your loop.
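The counter idea above can be sketched as a bounded work loop: cron relaunches the script periodically, and each invocation only runs a fixed number of iterations before exiting (the numbers are arbitrary placeholders):

```python
import time

def run_batch(iterations, interval):
    # each cron-launched invocation does a fixed amount of work and exits,
    # so the loop's lifetime is no longer tied to the SSH session
    done = 0
    for _ in range(iterations):
        # one unit of work would go here
        time.sleep(interval)
        done += 1
    return done
```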