Python threading - How to repeatedly execute a function in a separate thread?

I have this code:
import threading

def printit():
    print ("Hello, World!")
    threading.Timer(1.0, printit).start()

threading.Timer(1.0, printit).start()
I am trying to have "Hello, World!" printed every second, but when I run the code nothing is printed; the process is just kept alive.
I have read posts where exactly this code worked for people.
I am very confused by how hard it is to set a proper interval in Python, since I'm used to JavaScript. I feel like I'm missing something.
Help is appreciated.

I don't see any issue with your current approach. It is working for me in both Python 2.7 and 3.4.5.
import threading

def printit():
    print ("Hello, World!")
    # threading.Timer(1.0, printit).start()
    # ^ why do you need this? It works with it too, though

threading.Timer(1.0, printit).start()
which prints:
Hello, World!
Hello, World!
But I'd suggest starting the thread as:
thread = threading.Timer(1.0, printit)
thread.start()
So that you can stop the thread using:
thread.cancel()
Without a reference to the Timer object, you would have to shut down the interpreter to stop the thread.
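For example, here is a minimal sketch (my own, not taken from the answer above) that schedules the call and then cancels it before it fires:
import threading

def printit():
    print("Hello, World!")

# Keep a reference to the Timer so it can be cancelled later.
timer = threading.Timer(5.0, printit)
timer.start()

# Cancel before the 5 seconds elapse; printit never runs.
timer.cancel()
The key point is simply holding on to the Timer object so cancel() can be called on it.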
Alternate Approach:
Personally, I prefer to write a timer thread by extending the Thread class:
from threading import Thread, Event

class MyThread(Thread):
    def __init__(self, event):
        Thread.__init__(self)
        self.stopped = event

    def run(self):
        while not self.stopped.wait(0.5):
            print("Thread is running..")
Then start the thread with an Event object:
my_event = Event()
thread = MyThread(my_event)
thread.start()
You'll start seeing the following output:
Thread is running..
Thread is running..
Thread is running..
Thread is running..
To stop the thread, execute:
my_event.set()
This approach gives you more flexibility to modify the behavior in the future.

What might be an issue is that you are creating a new thread each time printit runs.
A better way may be to create just one thread that does whatever you want it to do, and then send it an event to stop when you are finished with it for some reason:
from threading import Thread, Event
from time import sleep

def threaded_function(evt):
    while not evt.is_set():
        print "running"
        sleep(1)

if __name__ == "__main__":
    e = Event()
    thread = Thread(target=threaded_function, args=(e,))
    thread.start()
    sleep(5)
    e.set()  # tells the thread to exit
    thread.join()
    print "thread finished...exiting"

I ran it in Python 3.6. It works as you expected.

I used Python 3.6.0, with the _thread and time modules.
import time
import _thread as t

def a(nothing=0):
    print('hi', nothing)
    time.sleep(1)
    t.start_new_thread(a, (nothing + 1,))

t.start_new_thread(a, (1,))  # first argument is the function, second is a tuple of arguments
The output will be like:
hi 1
hi 2
hi 3
....
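The same re-arming idea also works with threading.Timer; here is a minimal sketch of my own (not from any of the answers above) where the function reschedules itself each time it runs, roughly like JavaScript's setInterval:
import threading

def printit():
    print("Hello, World!")
    # Re-arm: schedule the next call one second from now.
    threading.Timer(1.0, printit).start()

printit()  # prints immediately, then once per second from a new Timer thread
Each firing starts a fresh Timer thread, so for anything long-running the Event-based approaches above are cleaner.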

Related

How to kill threads in Python with CTRL + C [duplicate]

I am testing Python threading with the following script:
import threading

class FirstThread (threading.Thread):
    def run (self):
        while True:
            print 'first'

class SecondThread (threading.Thread):
    def run (self):
        while True:
            print 'second'

FirstThread().start()
SecondThread().start()
This is running in Python 2.7 on Kubuntu 11.10. Ctrl+C will not kill it. I also tried adding a handler for system signals, but that did not help:
import signal
import sys

def signal_handler(signal, frame):
    sys.exit(0)

signal.signal(signal.SIGINT, signal_handler)
To kill the process I send the program to the background with Ctrl+Z (which isn't being ignored) and then kill it by PID. Why is Ctrl+C being ignored so persistently? How can I resolve this?
Ctrl+C terminates the main thread, but because your threads aren't in daemon mode, they keep running, and that keeps the process alive. We can make them daemons:
f = FirstThread()
f.daemon = True
f.start()
s = SecondThread()
s.daemon = True
s.start()
But then there's another problem - once the main thread has started your threads, there's nothing else for it to do. So it exits, and the threads are destroyed instantly. So let's keep the main thread alive:
import time
while True:
    time.sleep(1)
Now it will keep printing 'first' and 'second' until you hit Ctrl+C.
Edit: as commenters have pointed out, the daemon threads may not get a chance to clean up things like temporary files. If you need that, then catch the KeyboardInterrupt on the main thread and have it co-ordinate cleanup and shutdown. But in many cases, letting daemon threads die suddenly is probably good enough.
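A rough sketch of that coordinated shutdown (my reading of the suggestion, not code from the answer; the names are illustrative): keep the threads non-daemon, share an Event, and have the KeyboardInterrupt handler set it and then join:
import threading
import time

stop = threading.Event()

def worker(name):
    while not stop.is_set():
        print(name)
        time.sleep(1)
    # any cleanup (temp files, sockets, ...) would go here

threads = [threading.Thread(target=worker, args=(n,)) for n in ('first', 'second')]
for t in threads:
    t.start()

try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    stop.set()        # ask the workers to finish their current iteration
    for t in threads:
        t.join()      # wait for them to clean up and exit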
KeyboardInterrupt and signals are only seen by the process (i.e. the main thread)... Have a look at Ctrl-c i.e. KeyboardInterrupt to kill threads in python
I think it's best to call join() on your threads when you expect them to die. I've taken the liberty of changing your loops so they can end (you can add whatever cleanup is required there as well). The variable die is checked on each pass, and when it's True the program exits.
import threading
import time

class MyThread (threading.Thread):
    die = False

    def __init__(self, name):
        threading.Thread.__init__(self)
        self.name = name

    def run (self):
        while not self.die:
            time.sleep(1)
            print (self.name)

    def join(self):
        self.die = True
        super().join()

if __name__ == '__main__':
    f = MyThread('first')
    f.start()
    s = MyThread('second')
    s.start()
    try:
        while True:
            time.sleep(2)
    except KeyboardInterrupt:
        f.join()
        s.join()
An improved version of @Thomas K's answer:
Define a helper function is_any_thread_alive() (based on this gist), which lets main() terminate automatically.
Example code:
import threading
import time

def job1():
    ...

def job2():
    ...

def is_any_thread_alive(threads):
    return True in [t.is_alive() for t in threads]

if __name__ == "__main__":
    ...
    t1 = threading.Thread(target=job1, daemon=True)
    t2 = threading.Thread(target=job2, daemon=True)
    t1.start()
    t2.start()

    while is_any_thread_alive([t1, t2]):
        time.sleep(0)
One simple 'gotcha' to beware of: are you sure Caps Lock isn't on?
I was running a Python script in the Thonny IDE on a Pi 4. With Caps Lock on, Ctrl+Shift+C is passed to the keyboard buffer, not Ctrl+C.

signal not handled when multiple threads join [duplicate]

This should be very simple and I'm very surprised that I haven't been able to find this question answered already on Stack Overflow.
I have a daemon like program that needs to respond to the SIGTERM and SIGINT signals in order to work well with upstart. I read that the best way to do this is to run the main loop of the program in a separate thread from the main thread and let the main thread handle the signals. Then when a signal is received the signal handler should tell the main loop to exit by setting a sentinel flag that is routinely being checked in the main loop.
I've tried doing this but it is not working the way I expected. See the code below:
from threading import Thread
import signal
import time
import sys

stop_requested = False

def sig_handler(signum, frame):
    sys.stdout.write("handling signal: %s\n" % signum)
    sys.stdout.flush()
    global stop_requested
    stop_requested = True

def run():
    sys.stdout.write("run started\n")
    sys.stdout.flush()
    while not stop_requested:
        time.sleep(2)
    sys.stdout.write("run exited\n")
    sys.stdout.flush()

signal.signal(signal.SIGTERM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)

t = Thread(target=run)
t.start()
t.join()
sys.stdout.write("join completed\n")
sys.stdout.flush()
I tested this in the following two ways:
1)
$ python main.py > output.txt&
[2] 3204
$ kill -15 3204
2)
$ python main.py
ctrl+c
In both cases I expect this written to the output:
run started
handling signal: 15
run exited
join completed
In the first case the program exits but all I see is:
run started
In the second case the SIGINT signal is seemingly ignored when Ctrl+C is pressed and the program doesn't exit.
What am I missing here?
The problem is that, as explained in Execution of Python signal handlers:
A Python signal handler does not get executed inside the low-level (C) signal handler. Instead, the low-level signal handler sets a flag which tells the virtual machine to execute the corresponding Python signal handler at a later point (for example at the next bytecode instruction)
…
A long-running calculation implemented purely in C (such as regular expression matching on a large body of text) may run uninterrupted for an arbitrary amount of time, regardless of any signals received. The Python signal handlers will be called when the calculation finishes.
Your main thread is blocked on threading.Thread.join, which ultimately means it's blocked in C on a pthread_join call. Of course that's not a "long-running calculation", it's a block on a syscall… but nevertheless, until that call finishes, your signal handler can't run.
And, while on some platforms pthread_join will fail with EINTR on a signal, on others it won't. On Linux, I believe it depends on whether you select BSD-style or default siginterrupt behavior, but the default is no.
So, what can you do about it?
Well, I'm pretty sure the changes to signal handling in Python 3.3 actually changed the default behavior on Linux so you won't need to do anything if you upgrade; just run under 3.3+ and your code will work as you're expecting. At least it does for me with CPython 3.4 on OS X and 3.3 on Linux. (If I'm wrong about this, I'm not sure whether it's a bug in CPython or not, so you may want to raise it on python-list rather than opening an issue…)
On the other hand, pre-3.3, the signal module definitely doesn't expose the tools you'd need to fix this problem yourself. So, if you can't upgrade to 3.3, the solution is to wait on something interruptible, like a Condition or an Event. The child thread notifies the event right before it quits, and the main thread waits on the event before it joins the child thread. This is definitely hacky. And I can't find anything that guarantees it will make a difference; it just happens to work for me in various builds of CPython 2.7 and 3.2 on OS X and 2.6 and 2.7 on Linux…
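Here is a minimal sketch of that pre-3.3 workaround as I understand it (the details, such as waiting on the Event in a timeout loop, are my assumption rather than abarnert's exact code):
import signal
import sys
import threading
import time

stop_requested = threading.Event()
finished = threading.Event()

def sig_handler(signum, frame):
    stop_requested.set()

def run():
    sys.stdout.write("run started\n")
    while not stop_requested.is_set():
        time.sleep(2)
    sys.stdout.write("run exited\n")
    finished.set()  # notify the main thread just before quitting

signal.signal(signal.SIGTERM, sig_handler)
signal.signal(signal.SIGINT, sig_handler)

t = threading.Thread(target=run)
t.start()

# Wait on the Event (in a timeout loop) instead of blocking in join(),
# so the Python-level signal handler gets a chance to run.
while not finished.wait(1):
    pass
t.join()
sys.stdout.write("join completed\n")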
abarnert's answer was spot on. I'm still using Python 2.7 however. In order to solve this problem for myself I wrote an InterruptableThread class.
Right now it doesn't allow passing additional arguments to the thread target. Join doesn't accept a timeout parameter either. This is just because I don't need to do that. You can add it if you want. You will probably want to remove the output statements if you use this yourself. They are just there as a way of commenting and testing.
import threading
import signal
import sys

class InvalidOperationException(Exception):
    pass

# noinspection PyClassHasNoInit
class GlobalInterruptableThreadHandler:
    threads = []
    initialized = False

    @staticmethod
    def initialize():
        signal.signal(signal.SIGTERM, GlobalInterruptableThreadHandler.sig_handler)
        signal.signal(signal.SIGINT, GlobalInterruptableThreadHandler.sig_handler)
        GlobalInterruptableThreadHandler.initialized = True

    @staticmethod
    def add_thread(thread):
        if threading.current_thread().name != 'MainThread':
            raise InvalidOperationException("InterruptableThread objects may only be started from the Main thread.")
        if not GlobalInterruptableThreadHandler.initialized:
            GlobalInterruptableThreadHandler.initialize()
        GlobalInterruptableThreadHandler.threads.append(thread)

    @staticmethod
    def sig_handler(signum, frame):
        sys.stdout.write("handling signal: %s\n" % signum)
        sys.stdout.flush()
        for thread in GlobalInterruptableThreadHandler.threads:
            thread.stop()
        GlobalInterruptableThreadHandler.threads = []

class InterruptableThread:
    def __init__(self, target=None):
        self.stop_requested = threading.Event()
        self.t = threading.Thread(target=target, args=[self]) if target else threading.Thread(target=self.run)

    def run(self):
        pass

    def start(self):
        GlobalInterruptableThreadHandler.add_thread(self)
        self.t.start()

    def stop(self):
        self.stop_requested.set()

    def is_stop_requested(self):
        return self.stop_requested.is_set()

    def join(self):
        try:
            while self.t.is_alive():
                self.t.join(timeout=1)
        except (KeyboardInterrupt, SystemExit):
            self.stop_requested.set()
            self.t.join()
        sys.stdout.write("join completed\n")
        sys.stdout.flush()
The class can be used two different ways. You can sub-class InterruptableThread:
import time
import sys
from interruptable_thread import InterruptableThread

class Foo(InterruptableThread):
    def __init__(self):
        InterruptableThread.__init__(self)

    def run(self):
        sys.stdout.write("run started\n")
        sys.stdout.flush()
        while not self.is_stop_requested():
            time.sleep(2)
        sys.stdout.write("run exited\n")
        sys.stdout.flush()

foo = Foo()
foo2 = Foo()
foo.start()
foo2.start()
foo.join()
foo2.join()
sys.stdout.write("all exited\n")
sys.stdout.flush()
Or you can use it more like the way threading.Thread works. The run method has to take the InterruptableThread object as a parameter, though.
import time
import sys
from interruptable_thread import InterruptableThread

def run(t):
    sys.stdout.write("run started\n")
    sys.stdout.flush()
    while not t.is_stop_requested():
        time.sleep(2)
    sys.stdout.write("run exited\n")
    sys.stdout.flush()

t1 = InterruptableThread(run)
t2 = InterruptableThread(run)
t1.start()
t2.start()
t1.join()
t2.join()
sys.stdout.write("all exited\n")
sys.stdout.flush()
Do with it what you will.
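If you do want the extra-argument support the author mentions leaving out, one possible tweak (my sketch, not part of the original class; the subclass and argument names are hypothetical) is to append the extra arguments after the thread wrapper:
import sys
import threading
import time
from interruptable_thread import InterruptableThread

class InterruptableThreadWithArgs(InterruptableThread):
    def __init__(self, target=None, *args):
        InterruptableThread.__init__(self, target)
        if target:
            # Rebuild the inner Thread: pass the wrapper first, then the extras.
            self.t = threading.Thread(target=target, args=(self,) + args)

def run(t, label):
    while not t.is_stop_requested():
        sys.stdout.write("%s running\n" % label)
        sys.stdout.flush()
        time.sleep(2)

worker = InterruptableThreadWithArgs(run, "worker-1")
worker.start()
worker.join()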
I faced the same problem here: signal not handled when multiple threads join. After reading abarnert's answer, I changed to Python 3 and solved the problem. But I didn't want to change my whole program to Python 3, so I solved it instead by avoiding the call to thread join() before the signal is sent. Below is my code.
It is not very pretty, but it solved my problem in Python 2.7. My question was marked as a duplicate, so I put my solution here.
import threading, signal, time, os

RUNNING = True
threads = []

def monitoring(tid, itemId=None, threshold=None):
    global RUNNING
    while(RUNNING):
        print "PID=", os.getpid(), ";id=", tid
        time.sleep(2)
    print "Thread stopped:", tid

def handler(signum, frame):
    print "Signal is received:" + str(signum)
    global RUNNING
    RUNNING = False
    #global threads

if __name__ == '__main__':
    signal.signal(signal.SIGUSR1, handler)
    signal.signal(signal.SIGUSR2, handler)
    signal.signal(signal.SIGALRM, handler)
    signal.signal(signal.SIGINT, handler)
    signal.signal(signal.SIGQUIT, handler)
    print "Starting all threads..."
    thread1 = threading.Thread(target=monitoring, args=(1,), kwargs={'itemId': '1', 'threshold': 60})
    thread1.start()
    threads.append(thread1)
    thread2 = threading.Thread(target=monitoring, args=(2,), kwargs={'itemId': '2', 'threshold': 60})
    thread2.start()
    threads.append(thread2)
    while(RUNNING):
        print "Main program is sleeping."
        time.sleep(30)
    for thread in threads:
        thread.join()
    print "All threads stopped."

Celery: Stop an infinite loop running as a background thread

I am trying to utilize threading.Thread with Celery to work like a daemon. On a larger scope, the threads will poll hardware sensors as a part of a web UI-powered thermostat, but narrowed-down, this is the bit I'm stuck on:
from celery import Celery
from celery.signals import worker_init
from celery.signals import worker_process_shutdown
from threading import Thread
from threading import Event
from time import sleep

class ThisClass(object):
    def __init__(self):
        self.shutdown = Event()
        self.thread = Thread(target=self.BackgroundMethod)

    def Start(self):
        self.thread.start()

    def Stop(self):
        self.shutdown.set()

    def BackgroundMethod(self):
        while not self.shutdown.is_set():
            print("Hello, world!")
            sleep(1)

this_class = ThisClass()
celery_app = Celery("tasks", broker="amqp://guest@localhost//")

@worker_init.connect
def WorkerReady(**kwargs):
    this_class.Start()

@worker_process_shutdown.connect
def StopPollingSensors(**kwargs):
    this_class.Stop()
This Celery script is supposed to create an instance of ThisClass as this_class, and run this_class.Start() when Celery starts. When Celery is shutting down, it is supposed to call this_class.Stop(), which gracefully exits the Thread in ThisClass and Celery cleanly exits.
However, when I hit Ctrl-C in Celery to send a SIGINT, this_class's thread continues to run and Celery does not exit, even after multiple SIGTERMs are issued. What confuses me is that if I slip a print statement into ThisClass.Stop, I see it. Furthermore, if I add sleep(5); this_class.Stop() after this_class.Start(), the thread starts and stops as expected, and Celery will exit normally when issued a SIGINT.
How am I supposed to terminate threading.Thread instances in a Celery-based script?
Consider creating the Event object externally, and passing it to a Thread subclass. The thread loops on the event object. When an external function calls Event.set(), the thread loop exits out of the while loop cleanly.
import sys, time
from threading import Event, Thread, Timer

class ThisClass2(Thread):
    def __init__(self, event):
        super(ThisClass2, self).__init__()
        self.event = event

    def run(self):
        print 'thread start'
        while not self.event.wait(timeout=1.0):
            print("thread: Hello, world!")
        print 'thread done'

event2 = Event()
x = ThisClass2(event2)
x.start()
print 'okay'
time.sleep(1)
print 'signaling event'
event2.set()
print 'waiting'
x.join()
Example output:
thread start
okay
thread: Hello, world!
signaling event
waiting
thread done
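Tying this back to the Celery script in the question, the wiring might look roughly like this (a sketch under my own assumptions; the signal names come from the question, and the module holding ThisClass2 is hypothetical):
from threading import Event
from celery import Celery
from celery.signals import worker_init, worker_process_shutdown

from thisclass2 import ThisClass2  # hypothetical module containing the class above

shutdown_event = Event()
sensor_thread = ThisClass2(shutdown_event)

celery_app = Celery("tasks", broker="amqp://guest@localhost//")

@worker_init.connect
def start_polling(**kwargs):
    sensor_thread.start()

@worker_process_shutdown.connect
def stop_polling(**kwargs):
    shutdown_event.set()   # the run() loop sees this and exits cleanly
    sensor_thread.join()
Whether this resolves the shutdown behaviour in your setup still depends on which Celery signal actually fires when the worker stops.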


unable to create a thread in python

I have the following code, which compares user input:
import thread, sys

if(username.get_text() == 'xyz' and password.get_text() == '123'):
    thread.start_new_thread(run, ())

def run():
    print "running client"
    start = datetime.now().second
    while True:
        try:
            host = 'localhost'
            port = 5010
            time = abs(datetime.now().second - start)
            time = str(time)
            print time
            client = socket.socket()
            client.connect((host, port))
            client.send(time)
        except socket.error:
            pass
If I just call the function run() it works, but when I try to create a thread to run this function, for some reason the thread is not created and the run() function is not executed. I am unable to find any error.
Thanks in advance.
You really should use the threading module instead of thread.
What else are you doing? If you create a thread like this, the interpreter will exit regardless of whether the thread is still running or not.
For example:
import thread
import time

def run():
    time.sleep(2)
    print('ok')

thread.start_new_thread(run, ())
This produces:
Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr
Whereas:
import threading
import time

def run():
    time.sleep(2)
    print('ok')

t = threading.Thread(target=run)
t.daemon = True  # set thread to daemon ('ok' won't be printed in this case)
t.start()
works as expected. If you don't want to keep the interpreter waiting for the thread, just set daemon=True* on the created Thread.
*Edit: added that to the example.
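Conversely, if you do want 'ok' to be printed, keep the thread non-daemon and join it explicitly; a small sketch along the same lines:
import threading
import time

def run():
    time.sleep(2)
    print('ok')

t = threading.Thread(target=run)
t.start()
t.join()  # main thread waits here, so 'ok' is printed before the interpreter exits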
thread is a low-level module; you should use threading.
from threading import Thread
t = Thread(target=run, args=())
t.start()
