I'm doing a small project in Python/Tkinter and I have been looking for a way to check whether a process has finished, but without blocking. I have tried:
import subprocess

process = subprocess.Popen(command)
while process.poll() is None:
    print('Running!')
print('Finished!')
or:
process = subprocess.Popen(command)
stdoutdata, stderrdata = process.communicate()
print('Finished!')
Both snippets execute the command and print "Finished!" when the process ends, but the main program freezes while it waits, and that's what I want to avoid. I need the GUI to stay responsive while the process is running and then run some code right after it finishes. Any help?
A common approach is to use the threading module for this purpose. For example:
import time
from threading import Thread

# a flag the worker thread checks; set it to False to stop the loop
process = True

def check():
    while process:
        print('Running')
        time.sleep(1)  # you can wait as long as you like here without freezing the program
    else:
        print('Finished')

# call the function on a separate thread
Thread(target=check).start()

# or, if you want to keep a reference to it:
t = Thread(target=check)
# you may also want to set daemon to True so the thread ends when the program closes
t.daemon = True
t.start()
This way, when you set process = False the loop ends and the output shows 'Finished'.
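For the Tkinter case specifically, here is a minimal sketch (the command list and the on_done callback are hypothetical, not from the question) that waits for the subprocess on a worker thread and schedules the follow-up code back on the Tk event loop with root.after, since Tkinter widgets should only be touched from the main thread:

import subprocess
import tkinter as tk
from threading import Thread

root = tk.Tk()

def on_done(returncode):
    # runs on the Tk main loop once the process has finished
    print('Finished with exit code', returncode)

def run_command(command):
    def worker():
        process = subprocess.Popen(command)
        process.wait()  # blocks only this worker thread, not the GUI
        # hand the result back to the Tk event loop
        root.after(0, on_done, process.returncode)
    Thread(target=worker, daemon=True).start()

run_command(['ping', '-c', '1', 'localhost'])  # example command (POSIX ping)
root.mainloop()

If you want to be strict about thread safety, have the worker put the result on a queue.Queue and poll that queue from the main thread with root.after instead.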
I have gone through the Python (3.7) documentation to understand the concept of multiprocessing. A Process object has two methods, terminate() and kill(). On Unix, terminating a process is done with the SIGTERM signal and killing it uses the SIGKILL signal.
When I terminate the process and check its is_alive() status, it gives False. But when I use kill() and check the status, is_alive() gives True. I don't know why it gives True after killing the process if kill() works the same way as terminate().
import time
from multiprocessing import Process

def test():
    print("in test method")
    time.sleep(3)

if __name__ == '__main__':
    p1 = Process(target=test)
    p1.start()  # start the process
    # kill the process after 1 second
    time.sleep(1)
    p1.kill()
    print(p1.is_alive())  # why is the alive status True after killing the process?
    time.sleep(3)
    print(p1.is_alive())
Whether you use kill or terminate, these methods only start the termination of the process. You must then wait for the process to fully terminate by using join (if you try to use sleep, you are only guessing as to how long to sleep for the process to be fully terminated):
p1.kill()
p1.join() # wait for p1 to fully complete
print(p1.is_alive()) # prints False
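As an aside, after the join you can also check Process.exitcode; per the multiprocessing docs, a process terminated by signal N reports an exit code of -N on Unix (on Windows, kill() behaves like terminate()):

p1.kill()
p1.join()
print(p1.is_alive())  # False
print(p1.exitcode)    # -9 on Unix: terminated by SIGKILL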
I think the reason for this is that the time between the p1.kill() and print(p1.is_alive()) calls is so short that the interpreter checks whether the process is still alive before the kill has actually completed. For example, if you put time.sleep(0.001) between p1.kill() and print(p1.is_alive()), False will be printed instead of True:
import time
from multiprocessing import Process

def test():
    print("in test method")
    time.sleep(3)

if __name__ == '__main__':
    p1 = Process(target=test)
    p1.start()  # start the process
    # kill the process after 1 second
    time.sleep(1)
    p1.kill()
    time.sleep(0.001)  # wait for p1.kill() to take effect
    print(p1.is_alive())  # now prints False
    time.sleep(3)
    print(p1.is_alive())
I have a requirement to use Python to start a totally independent process. That means that even after the main process exits, the sub-process can keep running.
Just like the shell in Linux:
#./a.out &
then even if the ssh connection is lost, a.out can keep running.
I need a similar but unified way across Linux and Windows.
I have tried the multiprocessing module:
import multiprocessing
import time

def fun():
    while True:
        print("Hello")
        time.sleep(3)

if __name__ == '__main__':
    p = multiprocessing.Process(name="Fun", target=fun)
    p.daemon = True
    p.start()
    time.sleep(6)
If I set p.daemon = True, the print("Hello") stops after 6 seconds, just after the main process exits. But if I set p.daemon = False, the main process won't exit on time, and if I press CTRL+C to force-quit the main process, the print("Hello") stops as well.
So, is there any way to keep printing "Hello" even after the main process has exited?
The multiprocessing module is generally used to split a huge task into multiple subtasks and run them in parallel to improve performance.
In this case, you would want to use the subprocess module instead.
You can put your fun function in a separate file (sub.py):
import time

while True:
    print("Hello")
    time.sleep(3)
Then you can call it from the main file (main.py):
from subprocess import Popen
import time

if __name__ == '__main__':
    Popen(["python", "./sub.py"])
    time.sleep(6)
    print('Parent Exiting')
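By default, though, a child started this way still shares the parent's console or terminal, so closing that terminal can take the child down with it. If the child must be fully detached, here is a sketch using documented subprocess options (spawn_detached is just a hypothetical helper name):

import subprocess
import sys

def spawn_detached(args):
    if sys.platform == 'win32':
        # detach from the parent's console and process group
        flags = subprocess.DETACHED_PROCESS | subprocess.CREATE_NEW_PROCESS_GROUP
        return subprocess.Popen(args, creationflags=flags,
                                stdin=subprocess.DEVNULL,
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
    # POSIX: a new session means the child is not killed with the terminal
    return subprocess.Popen(args, start_new_session=True,
                            stdin=subprocess.DEVNULL,
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)

spawn_detached([sys.executable, './sub.py'])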
The subprocess module can do it. If you have a .py file like this:
from subprocess import Popen
p = Popen([r'C:\Program Files\VideoLAN\VLC\vlc.exe'])
The file will finish its run pretty quickly and exit, but vlc.exe will stay open.
In your case, because you want to run another function, you could in principle separate that function into its own .py file.
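If the separated file needs inputs, a common pattern is to pass them as command-line arguments (the file name sub.py here is hypothetical):

# sub.py: reads its argument from the command line
import sys

print('Hello,', sys.argv[1])

and launch it from the main file with Popen([sys.executable, './sub.py', 'world']).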
There seem to be a lot of ways to invoke a system process via the command line using Python and then read the output once the process is complete. For example, the subprocess module can accomplish this via subprocess.Popen(['ls']).
Is there a way to read the output from a command while it is still running?
For example, if I invoke the Python script multiply.py:
import time

def multiply(a, b):
    newA = a
    newB = b
    while True:  # endless loop
        newA = newA + 1
        newB = newB + 1
        product = newA * newB
        print('Product: ', product)
        time.sleep(1)

multiply(40, 2)
using something along the lines of subprocess.Popen(['python', 'multiply.py']), is there a way to
Start the process
Silently/actively capture all the output while the process is running
'jump in' at any time to check the contents of the entire output?
The above python script is a model of the sort of process from which I'm interested in capturing output, i.e. there is an endless loop, which prints an output every second; it is this output that I'm interested in actively monitoring/capturing.
Here is an implementation that runs the process on another thread (so you won't block the main thread) and communicates lines back using a Queue. The thread is stopped on demand using an Event:
#!/usr/bin/env python
from subprocess import Popen, PIPE
from threading import Thread, Event
from queue import Queue
from time import sleep

# communicates output lines from the process to the main thread
lines = Queue()
# signals the reader thread to stop
kill = Event()

def runner():
    # text=True gives str lines instead of bytes
    p = Popen(["python", "multiply.py"], stdout=PIPE, text=True)
    # read line by line to get streaming behaviour
    while p.poll() is None and not kill.is_set():
        lines.put(p.stdout.readline())
    p.kill()
    # in your use case this is redundant unless the process
    # is terminated externally - because your process is never done
    # after the process finished, some lines may have been missed
    if not kill.is_set():
        for line in p.stdout.readlines():
            lines.put(line)

# open the process on another thread
t = Thread(target=runner)
t.start()

# do stuff - lines aggregate on the queue meanwhile
print("First run")
sleep(1)
while not lines.empty():
    print(lines.get())

# move on to more important stuff...
sleep(3)

# now check the output we missed
print("Second run")
while not lines.empty():
    print(lines.get())

# done doing stuff
# tell the thread to stop itself
kill.set()
# wait for the thread
t.join()
print("done")
I am reading The Python Standard Library by Example and got confused when I arrived at page 509:
Up to this point, the example programs have implicitly waited to exit until all threads have completed their work. Programs sometimes spawn a thread as a daemon that runs without blocking the main program from exiting.
But after running some code, I get the opposite result. The code is this:
#!/usr/bin/env python
# encoding: utf-8
#
# Copyright (c) 2008 Doug Hellmann All rights reserved.
#
"""Creating and waiting for a thread.
"""
#end_pymotw_header

import threading
import time

def worker():
    """thread worker function"""
    print('Worker')
    # time.sleep(10000)
    return

threads = []
for i in range(5):
    t = threading.Thread(target=worker)
    threads.append(t)
    t.start()

print("main Exit")
and sometimes the result is this:
Worker
Worker
WorkerWorker
main Exit
Worker
So I want to ask: when does the main thread exit in Python after it starts several threads?
The main thread will exit whenever it is finished executing all the code in your script that is not started in a separate thread.
Since t.start() starts the thread and then returns control to the main thread, the main thread will simply continue to execute until it reaches the bottom of your script and then exit.
Since you started the other threads in a non-daemon mode, they will continue running until they are finished.
If you want the main thread to not exit until all the threads have finished, you should explicitly join them to the main thread. The join will cause the calling thread to wait until the thread being joined is finished.
for i in range(5):
    threads[i].join()
print("main Exit")
As #codesparkle pointed out, the more Pythonic way to write this would be to skip the index variable entirely.
for thread in threads:
    thread.join()
print("main Exit")
According to the threading docs:
The entire Python program exits when only daemon threads are left
This agrees with the quote you give, but the slight difference in wording explains the result you get. The 'main thread' exits when you would expect it to. Note that the worker threads keep running at that point - you can see this in the test output you give. So the main thread has finished, but the whole process is still running because other threads are still running.
The difference is that if some of those worker threads were daemonised, they would be forcibly killed when the last non-daemon thread finished. If all of the workers were daemon threads, then the entire process would finish - and you would be back at your system's shell prompt - very soon after you print 'main exit', and it would be very rare (though not impossible, owing to race conditions) for any worker to print after that.
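To make this concrete, here is a minimal sketch (the timing is arbitrary): with daemon=True the process ends right after the main thread, so the worker's final print usually never appears; with the default daemon=False, Python waits for the worker to finish first:

import threading
import time

def worker():
    time.sleep(1)
    print('Worker done')  # usually never printed when the thread is a daemon

t = threading.Thread(target=worker, daemon=True)
t.start()
print('main Exit')
# with daemon=True the process exits here and kills the worker;
# with daemon=False Python waits for the worker before exiting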
Your main thread will exit as soon as the for loop completes its execution. Your main thread starts new asynchronous threads, which means it will not wait until the new threads finish their execution. So in your case the main thread will start 5 threads in parallel and then exit itself.
Note that the main thread does not exit when you print "main Exit", but after it. Consider this program:
import threading
import time

def worker():
    """thread worker function"""
    print('Worker')
    time.sleep(1)
    print('Done')
    return

class Exiter(object):
    def __init__(self):
        self.a = 5.0
        print('I am alive')
    def __del__(self):
        print('I am dying')

exiter = Exiter()

threads = []
for i in range(5):
    t = threading.Thread(target=worker)
    threads.append(t)
    t.start()

print("main Exit")
I have created an object whose sole purpose is to print "I am dying" when it is being finalised. I am not deleting it explicitly anywhere, so it will only die when the main thread finishes, that is, when Python starts to kill everything to return memory to the OS.
If you run this you will see that the workers are still working when the main thread is done, but the object is still alive. 'I am dying' always comes after all the workers have finished their job.
For example, as follows:
import logging
import threading
import time
from threading import Thread

# log with the thread name so the output shows which thread emitted it
logging.basicConfig(level=logging.DEBUG,
                    format='(%(threadName)s) %(message)s')

class ThreadA(Thread):
    def __init__(self, mt):
        Thread.__init__(self)
        self.mt = mt
    def run(self):
        print('T1: sleeping...')
        time.sleep(4)
        print('current thread is', self.is_alive())
        print('main thread is', self.mt.is_alive())
        print('T1: raising...')

if __name__ == '__main__':
    mt = threading.current_thread()
    ta = ThreadA(mt)
    ta.start()
    logging.debug('main end')
The output is:
T1: sleeping...
(MainThread) main end
current thread is True
main thread is False
T1: raising...
You can see that the main thread's alive status is False.
I am creating a Python program that calls an external command periodically. The external command takes a few seconds to complete. I want to reduce the possibility of the external command terminating badly by adding a signal handler for SIGINT. Basically, I want SIGINT to wait until the command finishes executing before terminating the Python program. The problem is that the external program seems to be getting the SIGINT as well, causing it to end abruptly. I am invoking the command from a separate thread, since the Python documentation for signal mentions that only the main thread receives the signal, according to http://docs.python.org/2/library/signal.html. Can someone help with this?
Here is a stub of my code. Imagine that the external program is /bin/sleep:
import sys
import time
import threading
import signal

def sleep():
    import subprocess
    global sleeping
    cmd = ['/bin/sleep', '10000']
    sleeping = True
    p = subprocess.Popen(cmd)
    p.wait()
    sleeping = False

def sigint_handler(signum, frame):
    if sleeping:
        print('busy, will terminate shortly')
        while sleeping:
            time.sleep(0.5)
        sys.exit(0)
    else:
        print('clean exit')
        sys.exit(0)

sleeping = False
signal.signal(signal.SIGINT, sigint_handler)

while True:
    t1 = threading.Thread(target=sleep)
    t1.start()
    time.sleep(500)
The expected behavior is that pressing Ctrl+C N seconds after the program starts will result in it waiting (10000 - N) seconds and then exiting. What is happening is that the program terminates immediately. Thanks!
The problem is the way signal handlers are modified when executing a new process. From POSIX:
A child created via fork(2) inherits a copy of its parent's signal dispositions. During an execve(2), the dispositions of handled signals are reset to the default; the dispositions of ignored signals are left unchanged.
So what you need to do is:
Ignore the SIGINT signal
Start the external program
Set the SIGINT handler as desired
That way, the external program will ignore SIGINT.
Of course, this leaves a (very) small time window when your script won't respond to SIGINT. But that's something you'll have to live with.
For example:
sleeping = False

while True:
    t1 = threading.Thread(target=sleep)
    # ignore SIGINT so the child spawned by the thread inherits the
    # "ignored" disposition across exec
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    t1.start()
    # restore our own handler for the main program
    signal.signal(signal.SIGINT, sigint_handler)
    time.sleep(500)
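On POSIX, an alternative sketch that avoids toggling the handler at all is to start the child in its own session: a terminal-generated SIGINT goes to the foreground process group, so it never reaches the detached child (this uses Python 3's start_new_session argument):

import subprocess

# the child runs in its own session/process group, so Ctrl+C in the
# terminal signals our process group but not the child
p = subprocess.Popen(['/bin/sleep', '10000'], start_new_session=True)
p.wait()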