How do I write a function that can start and kill a subprocess in Python?
This is my code so far:
import subprocess
import signal
import time

def myfunction(action):
    if action == 'start':
        print 'Start subprocess'
        process = subprocess.Popen("ping google.com", shell=True)
    if action == 'stop':
        print 'Stop subprocess'
        process.send_signal(signal.SIGINT)

myfunction('start')
time.sleep(10)
myfunction('stop')
When I run this code I get this error:
Traceback (most recent call last):
  File "test.py", line 15, in <module>
    myfunction('stop')
  File "test.py", line 11, in myfunction
    process.send_signal(signal.SIGINT)
UnboundLocalError: local variable 'process' referenced before assignment
You need to save your subprocess handle and pass it into the function. When you call myfunction('stop'), there is nowhere for the function scope to get process from (hence the UnboundLocalError).
Without the function scope this works fine, which shows that your issue is with function scope and not really with process handling:

print 'Start subprocess'
process = subprocess.Popen("ping google.com", shell=True)
time.sleep(10)
print 'Stop subprocess'
process.send_signal(signal.SIGINT)
You could also use OOP and define a class that keeps the process as state.
Assuming you do not need to run many copies of the process, and to make it a bit more exotic, we can use class methods:
class MyClass(object):
    @classmethod
    def start(cls):
        print 'Start subprocess'
        cls.process = subprocess.Popen("ping google.com", shell=True)

    @classmethod
    def stop(cls):
        cls.process.send_signal(signal.SIGINT)

MyClass.start()
MyClass.stop()
This is not ideal, as it still allows you to create several new processes.
Quite often the singleton pattern is used in such cases; it ensures that only one process is running, though it is a bit out of fashion.
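For illustration, here is a minimal sketch of that guard idea (the ProcessManager name and the poll() checks are my additions, not something from the answers above):

import signal
import subprocess

class ProcessManager(object):
    """Holds at most one child process as a class attribute."""
    _process = None

    @classmethod
    def start(cls):
        # refuse to spawn a second copy while one is still running
        if cls._process is not None and cls._process.poll() is None:
            return
        cls._process = subprocess.Popen("ping google.com", shell=True)

    @classmethod
    def stop(cls):
        if cls._process is not None and cls._process.poll() is None:
            cls._process.send_signal(signal.SIGINT)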
The minimal fix (keeping myfunction) is to save process in a variable:
import subprocess
import signal
import time

def myfunction(action, process=None):
    if action == 'start':
        print 'Start subprocess'
        process = subprocess.Popen("ping google.com", shell=True)
        return process
    if action == 'stop':
        print 'Stop subprocess'
        process.send_signal(signal.SIGINT)

process = myfunction('start')
time.sleep(10)
myfunction('stop', process)
It seems that the problem you are having is due to the fact that process is declared as a local variable within myfunction, and in particular only within the 'start' if statement. This small scope means that when you call myfunction('stop'), the function has no notion of the 'process' variable.
There are several ways around this, but the most intuitive would be for myfunction to return process when one was made, and take one as a parameter when you want to close it. The code would look something like:
import subprocess
import signal
import time

def myfunction(action, process=None):
    if action == 'start':
        print 'Start subprocess'
        process = subprocess.Popen("ping google.com", shell=True)
        return process
    if action == 'stop':
        print 'Stop subprocess'
        process.send_signal(signal.SIGTERM)

process = myfunction('start')
time.sleep(10)
myfunction('stop', process)
I have just run this in 2.7.13 and it works fine.
As can be seen in the code below, two processes run together, but each has a moment when it may ask for input() in the terminal. Is there any way to pause the other process until the answer is given in the terminal?
File Code_One (an archaic and simple example to speed up the explanation):
from time import sleep

def main():
    sleep(1)
    print('run')
    sleep(1)
    print('run')
    sleep(1)
    input('Please, give the number:')
File Code_Two (an archaic and simple example to speed up the explanation):
from time import sleep

def main():
    sleep(2)
    input('Please, give the number:')
    sleep(1)
    print('run 2')
    sleep(1)
    print('run 2')
    sleep(1)
    print('run 2')
    sleep(1)
    print('run 2')
    sleep(1)
    print('run 2')
File Main_Code:
import Code_One
import Code_Two
import multiprocessing
from time import sleep

def main():
    while True:
        pression = multiprocessing.Process(target=Code_One.main)
        xgoals = multiprocessing.Process(target=Code_Two.main)
        pression.start()
        xgoals.start()
        pression.join()
        xgoals.join()
        print('Done')
        sleep(5)

if __name__ == '__main__':
    main()
How should I proceed in this situation?
In this example, since nothing pauses the other process, whenever one of them asks for input this error happens:
input('Please, give the number:')
EOFError: EOF when reading a line
Sure, this is possible. To do it you will need to use some sort of interprocess communication (IPC) mechanism to allow the two processes to coordinate. time.sleep is not a reliable way to do this, and there are much more efficient tools that are made for exactly this problem.
Probably the most efficient way is to use a multiprocessing.Event, like this:
import multiprocessing
import sys
import os

def Code_One(event, fno):
    proc_name = multiprocessing.current_process().name
    print(f'running {proc_name}')
    sys.stdin = os.fdopen(fno)
    val = input('give proc 1 input: ')
    print(f'proc 1 got input: {val}')
    event.set()

def Code_Two(event, fno):
    proc_name = multiprocessing.current_process().name
    print(f'running {proc_name} and waiting...')
    event.wait()
    sys.stdin = os.fdopen(fno)
    val = input('give proc 2 input: ')
    print(f'proc 2 got input {val}')

if __name__ == '__main__':
    event = multiprocessing.Event()
    pression = multiprocessing.Process(name='code_one', target=Code_One, args=(event, sys.stdin.fileno()))
    xgoals = multiprocessing.Process(name='code_two', target=Code_Two, args=(event, sys.stdin.fileno()))
    xgoals.start()
    pression.start()
    xgoals.join()
    pression.join()
This creates the event object, and the two subprocesses. Event objects have an internal flag that starts out False, and can then be toggled True by any process calling event.set(). If a process calls event.wait() while the flag is False, that process will block until another process calls event.set().
The event is created in the parent process and passed to each subprocess as an argument. Code_Two begins and calls event.wait(), which blocks until the internal flag in the event is set to True. Code_One executes immediately and then calls event.set(), which sets the event's internal flag to True and allows Code_Two to proceed. At that point both subprocesses run to completion, the parent's join() calls return, and the program ends.
This is a little hacky because it is also passing the stdin file number from the parent to the child processes. That is necessary because when subprocesses are forked, those file descriptors are closed, so for a child process to read stdin using input it first needs to open the correct input stream (that is what sys.stdin = os.fdopen(fno) is doing). It won't work to just send sys.stdin to the child as another argument, because of the mechanics Python uses to set up the environment for forked processes (sys.stdin is an IO wrapper object and is not pickleable).
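You can check that last claim directly; this quick sketch (mine, not from the answer above) shows that pickling sys.stdin fails:

import pickle
import sys

try:
    pickle.dumps(sys.stdin)
except TypeError as exc:
    # on CPython 3 this prints something like:
    # cannot pickle '_io.TextIOWrapper' object
    print('stdin is not pickleable:', exc)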
How do you make two routes control a daemon thread in Python?
Flask backend file:
from flask import Flask
from time import time, sleep
from threading import Thread

app = Flask(__name__)

def intro():
    while True:
        sleep(3)
        print(f" Current time : {time()}")

@app.route('/startbot')
def start_bot():
    global bot_thread
    bot_thread = Thread(target=intro, daemon=True)
    bot_thread.start()
    return "bot started "

@app.route('/stopbot')
def stop_bot():
    bot_thread.join()
    return

if __name__ == "__main__":
    app.run()
When trying to kill the thread, the curl request in the terminal does not return to the console, and the thread keeps printing data to the terminal.
The idea I had was to declare a variable that holds the reference to bot_thread and use the routes to control it.
To test this I used curl http://localhost:port/startbot and curl http://localhost:port/stopbot.
I can start the bot just fine, but when I try to kill it I get the following:
NameError: name 'bot_thread' is not defined
Any help and dos and don'ts will be very much appreciated.
Take into consideration that after killing the thread, a user should be able to create a new one and kill it as well.
Here is a Minimal Reproducible Example:
from threading import Thread

def intro():
    print("hello")

global bot_thread

def start_bot():
    bot_thread = Thread(target=intro, daemon=True)
    return

def stop_bot():
    if bot_thread:
        bot_thread.join()

if __name__ == "__main__":
    import time
    start_bot()    # simulating a request on it
    time.sleep(1)  # some time passes ...
    stop_bot()     # simulating a request on it
Traceback (most recent call last):
  File "/home/stack_overflow/so71056246.py", line 25, in <module>
    stop_bot() # simulating a request on it
  File "/home/stack_overflow/so71056246.py", line 17, in stop_bot
    if bot_thread:
NameError: name 'bot_thread' is not defined
My IDE makes the error visually clear for me: bot_thread is not used, because it is a local variable, not the global one, although they share the same name. This is a common pitfall for Python programmers; see this question or this one for example.
So:
def start_bot():
    global bot_thread
    bot_thread = Thread(target=intro, daemon=True)
    return
but then:
Traceback (most recent call last):
  File "/home/stack_overflow/so71056246.py", line 26, in <module>
    stop_bot() # simulating a request on it
  File "/home/stack_overflow/so71056246.py", line 19, in stop_bot
    bot_thread.join()
  File "/usr/lib/python3.9/threading.py", line 1048, in join
    raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
Hence:
def start_bot():
    global bot_thread
    bot_thread = Thread(target=intro, daemon=True)
    bot_thread.start()
    return
which finally gives:
hello
EDIT
When trying to kill the thread the curl request in the terminal does not return back to the console and the thread keeps on printing data to the terminal
@prometheus the bot_thread runs the intro function. Because it contains an infinite loop (while True), it will never reach the end of the function (the implicit return), so the thread is never considered finished. Because of that, when the main thread tries to join (wait until the thread finishes), it waits endlessly because the bot thread is stuck in its loop.
So you have to make it possible to exit the while loop. For example (as in the example I linked in a comment), use another global variable as a flag that gets set in the main thread (route stop_bot) and checked in the intro loop. Like so:
from time import time, sleep
from threading import Thread

def intro():
    global the_bot_should_continue_running
    while the_bot_should_continue_running:
        print(time())
        sleep(1)

global bot_thread
global the_bot_should_continue_running

def start_bot():
    global bot_thread, the_bot_should_continue_running
    bot_thread = Thread(target=intro, daemon=True)
    the_bot_should_continue_running = True  # before the `start` !
    bot_thread.start()
    return

def stop_bot():
    if bot_thread:
        global the_bot_should_continue_running
        the_bot_should_continue_running = False
        bot_thread.join()

if __name__ == "__main__":
    start_bot()  # simulating a request on it
    sleep(5.5)   # some time passes ...
    stop_bot()   # simulating a request on it
This prints 6 times and then exits.
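As a side note, an equivalent but slightly tidier variant of the same flag technique (my suggestion, not part of the answer above) uses threading.Event for the flag, which avoids one of the global statements:

from threading import Event, Thread
from time import sleep, time

stop_event = Event()

def intro():
    # run until the main thread asks us to stop
    while not stop_event.is_set():
        print(time())
        sleep(1)

bot_thread = Thread(target=intro, daemon=True)
bot_thread.start()
sleep(5.5)
stop_event.set()   # ask the loop to exit
bot_thread.join()  # join now returns promptly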
I want to store in a variable the last output of a subprocess after the user performs a keyboard interrupt. My problem is mainly with a subprocess that does not end, e.g. tail in my example below. Here is my code:
import subprocess

class Testclass:
    def Testdef(self):
        try:
            global out
            print "Tail running"
            tail_cmd = 'tail -f log.Reconnaissance'
            proc = subprocess.Popen([tail_cmd], stdout=subprocess.PIPE, shell=True)
            (out, err) = proc.communicate()
        except KeyboardInterrupt:
            print("KeyboardInterrupt received, stopping…")
        finally:
            print "program output:", out

if __name__ == "__main__":
    app = Testclass()
    app.Testdef()
Below is its output, which I don't understand at this moment.
Tail running
program output:
Traceback (most recent call last):
  File "./2Test.py", line 19, in <module>
    app.Testdef()
  File "./2Test.py", line 15, in Testdef
    print "program output:", out
NameError: global name 'out' is not defined
out not being defined indicates that proc.communicate() did not return any values; otherwise it would have populated your tuple (out, err). The question is whether the communicate() method was supposed to return, or whether, more likely, your keyboard interrupt simply killed it, thus preventing out from being defined.
I assume you imported the subprocess module, but make sure you do that first. I rewrote your program without using global out or the try statements.
import subprocess

class Testclass:
    def __init__(self, out):  # allows you to pass in the value of out
        self.out = out        # makes out a member of this class

    def Testdef(self):
        print("Tail running")
        tail_cmd = 'tail -f log.Reconnaissance'
        proc = subprocess.Popen([tail_cmd], stdout=subprocess.PIPE, shell=True)
        # Perhaps this is where you want to implement the try:
        (self.out, err) = proc.communicate()
        # and here the except:
        # and here the finally:

if __name__ == "__main__":
    app = Testclass(1)  # pass 1 (or anything, for testing) to the out variable
    app.Testdef()
    print('%r' % app.out)  # print the contents of the out variable
    # I get an empty string, ''
So as-is this program runs once and there is nothing in out. I believe that to create a meaningful example of the user doing a keyboard interrupt, we need the program to be doing something that can be interrupted. Maybe I can provide an example in the future...
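One way to make the example meaningful (a Python 3 sketch of my own, not from the answer): read the pipe line by line, so that whatever arrived before Ctrl-C is already saved when the interrupt lands:

import subprocess

lines = []
# same command and log file name as in the question
proc = subprocess.Popen(['tail', '-f', 'log.Reconnaissance'],
                        stdout=subprocess.PIPE)
try:
    for raw_line in proc.stdout:
        lines.append(raw_line.decode(errors='replace'))
except KeyboardInterrupt:
    proc.kill()
    proc.wait()
print('captured %d lines before the interrupt' % len(lines))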
I have this code:
class ExtendedProcess(multiprocessing.Process):
    def __init__(self):
        super(ExtendedProcess, self).__init__()
        self.stop_request = multiprocessing.Event()

    def join(self, timeout=None):
        logging.debug("stop request received")
        self.stop_request.set()
        super(ExtendedProcess, self).join(timeout)

    def run(self):
        logging.debug("process has started")
        while not self.stop_request.is_set():
            print "doing something"
        logging.debug("proc is stopping")
When I call start() on the process it should run forever, since self.stop_request is not set. But after some milliseconds join() is called by itself, breaking run. What is going on!? Why is join being called by itself?
Moreover, when I start a debugger and go line by line it suddenly works fine... What am I missing?
OK, thanks to ely's answer the reason hit me:
There is a race condition:
a new process is created...
as it is starting up and about to run logging.debug("process has started"), the main function hits its end.
the main function calls sys exit, and on exit Python asks all remaining processes to close with join().
since the child never actually reached "while not self.stop_request.is_set()", the overridden join() is called and self.stop_request.set() runs. By the time the loop condition is evaluated, stop_request is set and the code closes.
As mentioned in the updated question, this is because of a race condition. Below I put an initial example highlighting a simplistic race condition where the race is against the overall program exit, but this could also be caused by other types of scope exits or other general race conditions involving your process.
I copied your class definition and added some "main" code to run it, here's my full listing:
import logging
import multiprocessing
import time

class ExtendedProcess(multiprocessing.Process):
    def __init__(self):
        super(ExtendedProcess, self).__init__()
        self.stop_request = multiprocessing.Event()

    def join(self, timeout=None):
        logging.debug("stop request received")
        self.stop_request.set()
        super(ExtendedProcess, self).join(timeout)

    def run(self):
        logging.debug("process has started")
        while not self.stop_request.is_set():
            print("doing something")
            time.sleep(1)
        logging.debug("proc is stopping")

if __name__ == "__main__":
    p = ExtendedProcess()
    p.start()
    while True:
        pass
The above code listing runs as expected for me using both Python 2.7.11 and 3.6.4. It loops infinitely and the process never terminates:
ely#eschaton:~/programming$ python extended_process.py
doing something
doing something
doing something
doing something
doing something
... and so on
However, if I instead use this code in my main section, it exits right away (as expected):
if __name__ == "__main__":
    p = ExtendedProcess()
    p.start()
This exits because the interpreter reaches the end of the program, which in turn triggers automatic destruction of the p object as it goes out of scope of the whole program.
Note this could also explain why it works for you in the debugger. That is an interactive programming session, so after you start p, the debugger environment allows you to wait around and inspect it ... it would not be automatically destroyed unless you somehow invoked it within some scope that is exited while stepping through the debugger.
Just to verify the join behavior too, I also tried with this main block:
if __name__ == "__main__":
    log = logging.getLogger()
    log.setLevel(logging.DEBUG)
    p = ExtendedProcess()
    p.start()
    st_time = time.time()
    while time.time() - st_time < 5:
        pass
    p.join()
    print("Finished!")
and it works as expected:
ely#eschaton:~/programming$ python extended_process.py
DEBUG:root:process has started
doing something
doing something
doing something
doing something
doing something
DEBUG:root:stop request received
DEBUG:root:proc is stopping
Finished!
I need to do the following in Python. I want to spawn a process (subprocess module?), and:
if the process ends normally, to continue exactly from the moment it terminates;
if, otherwise, the process "gets stuck" and doesn't terminate within (say) one hour, to kill it and continue (possibly giving it another try, in a loop).
What is the most elegant way to accomplish this?
The subprocess module will be your friend. Start the process to get a Popen object, then pass it to a function like this. Note that this only raises an exception on timeout. If desired you can catch the exception and call the kill() method on the Popen process. (kill is new in Python 2.6, btw)
import time

def wait_timeout(proc, seconds):
    """Wait for a process to finish, or raise exception after timeout"""
    start = time.time()
    end = start + seconds
    interval = min(seconds / 1000.0, .25)

    while True:
        result = proc.poll()
        if result is not None:
            return result
        if time.time() >= end:
            raise RuntimeError("Process timed out")
        time.sleep(interval)
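Usage might look like this (my sketch, following the note above about catching the exception; the command name is a placeholder):

import subprocess

proc = subprocess.Popen(['some-long-running-command'])
try:
    rc = wait_timeout(proc, 60 * 60)  # give it one hour
    print('finished normally with code', rc)
except RuntimeError:
    proc.kill()   # available in Python 2.6+
    proc.wait()   # reap the killed child
    print('process timed out and was killed')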
There are at least two ways to do this using psutil, as long as you know the process PID.
Assuming the process is created as such:
import subprocess
subp = subprocess.Popen(['progname'])
...you can get its creation time in a busy loop like this:
import psutil, time

TIMEOUT = 60 * 60  # 1 hour

p = psutil.Process(subp.pid)
while 1:
    if (time.time() - p.create_time()) > TIMEOUT:
        p.kill()
        raise RuntimeError('timeout')
    time.sleep(5)
...or simply, you can do this:
import psutil

p = psutil.Process(subp.pid)
try:
    p.wait(timeout=60*60)
except psutil.TimeoutExpired:
    p.kill()
    raise
Also, while you're at it, you might be interested in the following extra APIs:
>>> p.status()
'running'
>>> p.is_running()
True
>>>
I had a similar question and found this answer. Just for completeness, I want to add one more way to terminate a hanging process after a given amount of time: the Python signal library.
https://docs.python.org/2/library/signal.html
From the documentation:
import signal, os

def handler(signum, frame):
    print 'Signal handler called with signal', signum
    raise IOError("Couldn't open device!")

# Set the signal handler and a 5-second alarm
signal.signal(signal.SIGALRM, handler)
signal.alarm(5)

# This open() may hang indefinitely
fd = os.open('/dev/ttyS0', os.O_RDWR)

signal.alarm(0)  # Disable the alarm
Since you wanted to spawn a new process anyway, this might not be the best solution for your problem, though.
A nice, passive way is to use a threading.Timer and set up a callback function.
from threading import Timer

# execute the command
p = subprocess.Popen(command)

# save the proc object - either on a class (like this example), or 'p' can be global
self.p = p

# configure and init the timer;
# kill_proc is a callback function which can also live on the class or simply be global
t = Timer(seconds, self.kill_proc)

# start timer
t.start()

# wait for the process to return
rcode = p.wait()
t.cancel()
If the process finishes in time, wait() ends and the code continues here; cancel() stops the timer. If meanwhile the timer runs out and executes kill_proc in a separate thread, wait() will also return here and cancel() will do nothing. By the value of rcode you will know whether we timed out or not. The simplest kill_proc: (you can of course do anything extra there)
def kill_proc(self):
    # self.p is the Popen object saved above; os.kill needs its pid
    os.kill(self.p.pid, signal.SIGTERM)
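Put together as a self-contained sketch (the names and the sleep command are mine; Popen.terminate is used instead of os.kill for brevity):

import subprocess
from threading import Timer

p = subprocess.Popen(['sleep', '100'])  # stand-in for the real command
t = Timer(5, p.terminate)               # fires only if the process is still running
t.start()
rcode = p.wait()                        # returns early if the timer killed it
t.cancel()                              # harmless if the timer already fired
print('return code:', rcode)            # negative on POSIX when killed by a signal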
Kudos to Peter Shinners for his nice suggestion about the subprocess module. I was using exec() before and did not have any control over the running time, and especially over terminating it. My simplest template for this kind of task is the following; I am just using the timeout parameter of subprocess.run() to monitor the running time. Of course you can get standard output and error as well if needed:
from subprocess import run, TimeoutExpired, CalledProcessError

# fls is a list of script paths and f is an open log file (both defined elsewhere)
for file in fls:
    try:
        run(["python3.7", file], check=True, timeout=7200)  # 2 hours timeout
        print("scraped :)", file)
    except TimeoutExpired:
        message = "Timeout :( !!!"
        print(message, file)
        f.write("{message} {file}\n".format(file=file, message=message))
    except CalledProcessError:
        message = "SOMETHING HAPPENED :( !!!, CHECK"
        print(message, file)
        f.write("{message} {file}\n".format(file=file, message=message))