A library I need to use from Python 3 (iperf3) requires that its 'run' function be executed in the main thread.
I'm running some tests to check whether a new process created with the multiprocessing library gives me a usable main thread, but with the snippet below I cannot get a new 'main thread' for the process.
What would be the recommended way to run a forked process as the main thread of the new process? Is that possible? Would a system like Celery help with this? I'm planning to run this from a Flask app.
Thanks!
#! /usr/bin/python3
import threading
import multiprocessing as mp

def mp_call():
    try:
        print("mp_call is mainthread? {}".format(isinstance(threading.current_thread(), threading._MainThread)))
    except Exception as e:
        print('create iperf e:{}'.format(e))

def thread_call():
    try:
        print("thread_call is mainthread? {}".format(isinstance(threading.current_thread(), threading._MainThread)))
        p = mp.Process(target=mp_call, args=[])
        p.daemon = False
        p.start()
        p.join()
        print('Process ended')
    except Exception as e:
        print('thread e:{}'.format(e))

t = threading.Thread(target=thread_call)
t.daemon = False
t.start()
t.join()
print('Thread ended')
In fact, all other threads are dead after a fork; the forked child gets a new "main" thread, which is simply the thread that called fork. Your checking method is wrong: threading._MainThread is not a public API, so use threading.main_thread() instead:
assert threading.current_thread() == threading.main_thread()
The isinstance check fails because the main thread gets replaced after the fork, so it is no longer an instance of the _MainThread subclass.
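A minimal sketch of the corrected check, applied to the question's setup (the output messages are illustrative):

import threading
import multiprocessing as mp

def mp_call():
    # In the child process the current thread *is* the new main thread,
    # even though it is not an instance of the private _MainThread class.
    print("mp_call is main thread? {}".format(
        threading.current_thread() is threading.main_thread()))

def thread_call():
    p = mp.Process(target=mp_call)
    p.start()
    p.join()  # prints: mp_call is main thread? True

t = threading.Thread(target=thread_call)
t.start()
t.join()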
Related
Is there a way to make the processes in concurrent.futures.ProcessPoolExecutor terminate if the parent process terminates for any reason?
Some details: I'm using ProcessPoolExecutor in a job that processes a lot of data. Sometimes I need to terminate the parent process with a kill command, but when I do that the processes from ProcessPoolExecutor keep running and I have to manually kill them too. My primary work loop looks like this:
with concurrent.futures.ProcessPoolExecutor(n_workers) as executor:
    result_list = [executor.submit(_do_work, data) for data in data_list]
    for id, future in enumerate(
            concurrent.futures.as_completed(result_list)):
        print(f'{id}: {future.result()}')
Is there anything I can add here or do differently to make the child processes in executor terminate if the parent dies?
You can start a thread in each worker process that terminates that process when the parent dies:
def start_thread_to_terminate_when_parent_process_dies(ppid):
    pid = os.getpid()

    def f():
        while True:
            try:
                # Signal 0 performs error checking only: it raises OSError
                # once the parent pid no longer exists.
                os.kill(ppid, 0)
            except OSError:
                # Parent is gone: terminate this process.
                os.kill(pid, signal.SIGTERM)
            time.sleep(1)

    thread = threading.Thread(target=f, daemon=True)
    thread.start()
Usage: pass initializer and initargs to ProcessPoolExecutor
with concurrent.futures.ProcessPoolExecutor(
    n_workers,
    initializer=start_thread_to_terminate_when_parent_process_dies,  # +
    initargs=(os.getpid(),),                                         # +
) as executor:
This works even if the parent process is SIGKILL/kill -9'ed.
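Putting the pieces together, a minimal self-contained sketch (the _do_work body, worker count, and inputs are placeholders):

import concurrent.futures
import os
import signal
import threading
import time

def start_thread_to_terminate_when_parent_process_dies(ppid):
    pid = os.getpid()

    def f():
        while True:
            try:
                os.kill(ppid, 0)  # raises OSError once the parent is gone
            except OSError:
                os.kill(pid, signal.SIGTERM)
            time.sleep(1)

    threading.Thread(target=f, daemon=True).start()

def _do_work(data):  # placeholder task
    time.sleep(1)
    return data * 2

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor(
        4,
        initializer=start_thread_to_terminate_when_parent_process_dies,
        initargs=(os.getpid(),),
    ) as executor:
        futures = [executor.submit(_do_work, d) for d in range(8)]
        for id, future in enumerate(concurrent.futures.as_completed(futures)):
            print(f'{id}: {future.result()}')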
I would suggest two changes:
Use a kill -15 command, which can be handled by the Python program as a SIGTERM signal, rather than a kill -9 command.
Use a multiprocessing pool created with the multiprocessing.pool.Pool class, whose terminate method works quite differently from that of the concurrent.futures.ProcessPoolExecutor class: it kills all processes in the pool, so any tasks that have been submitted and are running are also terminated immediately.
Your equivalent program using the new pool and handling a SIGTERM interrupt would be:
from multiprocessing import Pool
import signal
import sys
import os

...

def handle_sigterm(*args):
    #print('Terminating...', file=sys.stderr, flush=True)
    pool.terminate()
    sys.exit(1)

# The process to be "killed", if necessary:
print(os.getpid(), file=sys.stderr)

pool = Pool(n_workers)
signal.signal(signal.SIGTERM, handle_sigterm)
results = pool.imap_unordered(_do_work, data_list)
for id, result in enumerate(results):
    print(f'{id}: {result}')
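To trigger the handler, send SIGTERM to the PID the script prints, e.g. kill -15 <pid> from another shell; the handler then terminates the pool, killing any running workers, and exits.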
You could run the script in a cgroup. When you need to kill the whole thing, you can do so by using the cgroup's kill switch. Even a CPU cgroup will do the trick, as you can access the group's PIDs.
Check this article on how to use cgexec.
This code works in IDLE (idle3), but in a console (macOS, Windows, Linux) thread2 closes instantly if it is set as a daemon. Is there any explanation for that? Is there also a workaround to properly have a daemon thread asking for user input?
import queue
import threading
import sys

def worker(q):
    _text = ''
    while _text == '':
        _text = q.get()
    print('[worker]input was ', _text)
    sys.exit()

def dialog(q):
    while True:
        try:
            _select = input('[dialog]enter text:')
            if _select != '':
                q.put(_select)
        except EOFError:
            pass
        except KeyboardInterrupt:
            print("bye")
            sys.exit(0)
        except Exception as e:
            print(e)
            sys.exit(1)
        if 'esc'.lower() in _select.lower():
            sys.exit()

q = queue.Queue()
thread1 = threading.Thread(target=worker, args=(q,))
thread2 = threading.Thread(target=dialog, args=(q,))
thread1.setDaemon(True)
thread2.setDaemon(True)
print('start asking')
thread1.start()
thread2.start()
Thanks for any hint on the issue.
Daemon threads are killed as soon as the main thread exits, and the code you've given as an example exits directly after starting the two child threads. To solve this, 'join' the threads back to the main thread: this makes the main thread wait for the child threads to finish.
thread1.join()
thread2.join()
at the end of your file should solve this problem.
https://docs.python.org/3.5/library/threading.html#threading.Thread.join
Also, why do you want to run this application as daemon?
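For reference, a minimal sketch of the corrected tail of the script (worker and dialog as defined in the question; passing daemon=True at construction is equivalent to setDaemon(True)):

q = queue.Queue()
thread1 = threading.Thread(target=worker, args=(q,), daemon=True)
thread2 = threading.Thread(target=dialog, args=(q,), daemon=True)
print('start asking')
thread1.start()
thread2.start()
# Keep the main thread alive; without these joins it exits immediately
# and the daemon threads are killed along with it.
thread1.join()
thread2.join()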
When you import and use a package, that package can start non-daemon threads. Until those threads finish, Python cannot exit properly (e.g. with sys.exit(0)). For example, imagine that thread t below comes from some package. When an unhandled exception occurs in the main thread, you want to terminate, but the program won't exit immediately; it will wait 60 s until the thread terminates.
import sys, time, threading

def main():
    t = threading.Thread(target=time.sleep, args=(60,))
    t.start()
    a = 5 / 0

if __name__ == '__main__':
    try:
        main()
    except:
        sys.exit(1)
So I came up with two options: replace sys.exit(1) with os._exit(1), or enumerate all threads and make them daemons. Both seem to work, but which do you think is better? os._exit won't flush stdio buffers, while setting the daemon attribute on running threads feels like a hack and may not be guaranteed to work all the time.
import sys, time, threading

def main():
    t = threading.Thread(target=time.sleep, args=(60,))
    t.start()
    a = 5 / 0

if __name__ == '__main__':
    try:
        main()
    except:
        # Force every remaining non-daemon thread to daemon so the
        # interpreter can exit without waiting for it.
        for t in threading.enumerate():
            if not t.daemon and t.name != "MainThread":
                t._daemonic = True
        sys.exit(1)
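For comparison, a minimal sketch of the os._exit variant, flushing the standard streams by hand since os._exit skips the normal cleanup:

import os, sys, time, threading

def main():
    t = threading.Thread(target=time.sleep, args=(60,))
    t.start()
    a = 5 / 0

if __name__ == '__main__':
    try:
        main()
    except:
        sys.stdout.flush()  # os._exit() does not flush stdio buffers
        sys.stderr.flush()
        os._exit(1)  # exits immediately, without waiting for non-daemon threads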
I am writing a python script that needs to run a thread which listens to a network socket.
I'm having trouble killing it with Ctrl+C using the code below:
#!/usr/bin/python
import signal, sys, threading

THREADS = []

def handler(signal, frame):
    global THREADS
    print "Ctrl-C.... Exiting"
    for t in THREADS:
        t.alive = False
    sys.exit(0)

class thread(threading.Thread):
    def __init__(self):
        self.alive = True
        threading.Thread.__init__(self)

    def run(self):
        while self.alive:
            # do something
            pass

def main():
    global THREADS
    t = thread()
    t.start()
    THREADS.append(t)

if __name__ == '__main__':
    signal.signal(signal.SIGINT, handler)
    main()
I'd appreciate any advice on how to catch Ctrl+C and terminate the script.
The issue is that after execution falls off the main thread (after main() returns), the threading module pauses, waiting for the other threads to finish, using locks; and locks cannot be interrupted by signals. This is the case in Python 2.x at least.
One easy fix is to avoid falling off the main thread by adding an infinite loop that calls some function which sleeps until some action is available, such as select.select(). If you don't need the main thread to do anything at all, use signal.pause(). Example:
if __name__ == '__main__':
    signal.signal(signal.SIGINT, handler)
    main()
    while True:        # added
        signal.pause() # added
It's because signals can only be caught by the main thread, and here the main thread ended its life long ago (the application is waiting for your thread to finish). Try adding
while True:
    sleep(1)
to the end of your main() (and of course from time import sleep at the very top).
or as Kevin said:
for t in THREADS:
    t.join(1)  # join with timeout; without a timeout the signal cannot be caught
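A minimal sketch of that join-with-timeout pattern in context (polling in a loop so the main thread keeps waking up and can run the SIGINT handler until every thread has exited):

if __name__ == '__main__':
    signal.signal(signal.SIGINT, handler)
    main()
    for t in THREADS:
        while t.is_alive():
            t.join(1)  # wake up every second so SIGINT can be delivered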
I have the following code, which compares user input:
import thread, sys, socket
from datetime import datetime

def run():
    print "running client"
    start = datetime.now().second
    while True:
        try:
            host = 'localhost'
            port = 5010
            time = abs(datetime.now().second - start)
            time = str(time)
            print time
            client = socket.socket()
            client.connect((host, port))
            client.send(time)
        except socket.error:
            pass

if username.get_text() == 'xyz' and password.get_text() == '123':
    thread.start_new_thread(run, ())
If I just call the function run() directly it works, but when I try to create a thread to run it, for some reason the thread is not created and the run() function is not executed. I am unable to find any error.
Thanks in advance...
You really should use the threading module instead of thread.
What else are you doing? If you create a thread like this, the interpreter will exit regardless of whether the thread is still running.
For example:
import thread
import time

def run():
    time.sleep(2)
    print('ok')

thread.start_new_thread(run, ())
This produces:
Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr
whereas:
import threading
import time

def run():
    time.sleep(2)
    print('ok')

t = threading.Thread(target=run)
t.daemon = True  # set thread to daemon ('ok' won't be printed in this case)
t.start()
works as expected. If you don't want to keep the interpreter waiting for the thread, just set daemon=True* on the generated Thread.
*Edit: added that to the example.
thread is a low-level module; you should use threading:
from threading import Thread
t = Thread(target=run, args=())
t.start()
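Because t here is a regular (non-daemon) threading.Thread, the interpreter will wait for run() to finish instead of tearing the thread down at exit, which is exactly what the low-level thread module did not do. If you want to block explicitly until the thread completes, add t.join() after t.start().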