How do you make two routes control a daemon thread in Python?
Flask backend file:
from flask import Flask
from time import time, sleep
from threading import Thread

app = Flask(__name__)

def intro():
    while True:
        sleep(3)
        print(f" Current time : {time()}")

@app.route('/startbot')
def start_bot():
    global bot_thread
    bot_thread = Thread(target=intro, daemon=True)
    bot_thread.start()
    return "bot started "

@app.route('/stopbot')
def stop_bot():
    bot_thread.join()
    return

if __name__ == "__main__":
    app.run()
When trying to kill the thread, the curl request in the terminal does not return to the console and the thread keeps printing data to the terminal.
The idea I had was to declare a variable that holds the reference to the bot thread and use the routes to control it.
To test this I used curl http://localhost:port/startbot and curl http://localhost:port/stopbot.
I can start the bot just fine, but when I try to kill it I get the following:
NameError: name 'bot_thread' is not defined
Any help and dos and don'ts will be very appreciated.
Take into consideration that after killing the thread, a user should be able to create a new one and also be able to kill it.
Here is a Minimal Reproducible Example:
from threading import Thread

def intro():
    print("hello")

global bot_thread

def start_bot():
    bot_thread = Thread(target=intro, daemon=True)
    return

def stop_bot():
    if bot_thread:
        bot_thread.join()

if __name__ == "__main__":
    import time
    start_bot()     # simulating a request on it
    time.sleep(1)   # some time passes ...
    stop_bot()      # simulating a request on it
Traceback (most recent call last):
  File "/home/stack_overflow/so71056246.py", line 25, in <module>
    stop_bot()  # simulating a request on it
  File "/home/stack_overflow/so71056246.py", line 17, in stop_bot
    if bot_thread:
NameError: name 'bot_thread' is not defined
My IDE makes the error visually clear for me: bot_thread is not used, because it is a local variable, not the global one, although they have the same name. This is a common pitfall for Python programmers; see this question or this one for example.
So:
def start_bot():
    global bot_thread
    bot_thread = Thread(target=intro, daemon=True)
    return
but
Traceback (most recent call last):
  File "/home/stack_overflow/so71056246.py", line 26, in <module>
    stop_bot()  # simulating a request on it
  File "/home/stack_overflow/so71056246.py", line 19, in stop_bot
    bot_thread.join()
  File "/usr/lib/python3.9/threading.py", line 1048, in join
    raise RuntimeError("cannot join thread before it is started")
RuntimeError: cannot join thread before it is started
Hence:
def start_bot():
    global bot_thread
    bot_thread = Thread(target=intro, daemon=True)
    bot_thread.start()
    return
which finally gives:
hello
EDIT
When trying to kill the thread, the curl request in the terminal does not return to the console and the thread keeps printing data to the terminal.
@prometheus the bot_thread runs the intro function. Because it contains an infinite loop (while True), it never reaches the end of the function (the implicit return), so the thread is never considered finished. Because of that, when the main thread tries to join it (wait until the thread finishes, then get the result), it waits endlessly, because the bot thread is stuck in the loop.
So you have to make it possible to exit the while loop. For example (as in the example I linked in a comment), use another global variable as a flag that gets set in the main thread (the stop_bot route) and checked in the intro loop. Like so:
from time import time, sleep
from threading import Thread

def intro():
    global the_bot_should_continue_running
    while the_bot_should_continue_running:
        print(time())
        sleep(1)

global bot_thread
global the_bot_should_continue_running

def start_bot():
    global bot_thread, the_bot_should_continue_running
    bot_thread = Thread(target=intro, daemon=True)
    the_bot_should_continue_running = True  # before the `start` !
    bot_thread.start()
    return

def stop_bot():
    if bot_thread:
        global the_bot_should_continue_running
        the_bot_should_continue_running = False
        bot_thread.join()

if __name__ == "__main__":
    start_bot()   # simulating a request on it
    sleep(5.5)    # some time passes ...
    stop_bot()    # simulating a request on it
This prints 6 times, then exits.
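Applied back to the Flask code from the question, the same flag-based approach could look roughly like the sketch below. This is untested; the name keep_running and the is_alive() guard are illustrative rather than from the original code, and it assumes the default single-process Flask development server (app.run()), so both routes and the bot thread share the same module-level globals.

from flask import Flask
from time import time, sleep
from threading import Thread

app = Flask(__name__)

bot_thread = None      # reference to the current bot thread, if any
keep_running = False   # flag checked by the loop inside intro()

def intro():
    while keep_running:
        sleep(3)
        print(f"Current time : {time()}")

@app.route('/startbot')
def start_bot():
    global bot_thread, keep_running
    if bot_thread is not None and bot_thread.is_alive():
        return "bot already running"
    keep_running = True   # set the flag before starting the thread
    bot_thread = Thread(target=intro, daemon=True)
    bot_thread.start()
    return "bot started"

@app.route('/stopbot')
def stop_bot():
    global keep_running
    if bot_thread is None or not bot_thread.is_alive():
        return "no bot running"
    keep_running = False  # the loop exits after its current sleep
    bot_thread.join()     # returns once intro() has finished
    return "bot stopped"

if __name__ == "__main__":
    app.run()

Because intro() only checks the flag once per iteration, /stopbot can take up to one full sleep interval (3 seconds here) before it returns. Starting, stopping and starting again works, since /startbot always creates a fresh Thread object.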
Related
I am building a monitoring system where, once I start the application, it continuously monitors changes and outputs the results to a database. For this I used a while loop for the repetitive monitoring tasks, with a class-level variable set to 'True' as the loop condition. I am stuck on how to change the flag to 'False' while the application is running in the while loop.
Sample code looks as follows:
class Monitor:
    def __init__(self):
        self.monitor = True

    def start(self):
        i = 0
        while(self.monitor):
            i += 1
I run the following lines on the command line:
>>> from StartStopMontoring import Monitor
>>> m=Monitor()
>>> m.start()
^CTraceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Volumes/DATA/Innominds/WorkSpace/Python/StartStopMontoring.py", line 7, in start
    while(self.monitor):
KeyboardInterrupt
>>>
The second line creates the object and the third line calls the function; after that I am unable to set the flag to False, since the while loop's always-true condition keeps it running.
(I had to interrupt the command line to stop the application.)
Precisely: how can I set a class-level flag to False while the while loop is running?
Note: the flag change comes from external input (e.g. the user wants to stop the system), not from a condition met internally within the loop.
Launch the loop in a different thread so that you don't lose control of your program.
import threading

class Monitor:
    (....)

    def start(self):
        threading.Thread(target=self.run).start()

    def run(self):
        i = 0
        while self.monitor:
            i += 1
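For example, assuming __init__ still sets self.monitor = True as in the question, the loop can now be stopped from the same interactive prompt, because start() returns immediately:

>>> m = Monitor()
>>> m.start()          # returns at once; the loop runs in a background thread
>>> m.monitor = False  # run() sees the flag on its next iteration and exits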
With the code below you can either temporarily halt the monitoring or quit it completely.
import threading
import time

class Monitor(threading.Thread):
    def __init__(self):
        self.b_quit = False
        self.monitor = True
        self.event = threading.Event()  # Whenever this event is set, monitoring should happen
        self.event.set()                # By default turn on monitoring
        threading.Thread.__init__(self)

    def run(self):
        i = 0
        while self.monitor:
            eventSet = self.event.wait()
            if not self.monitor or self.b_quit:
                break
            print("---- Thread is active: {}".format(i), end='\r')
            i += 1

    def begin(self):
        self.event.set()

    def halt(self):
        self.event.clear()

    def quit(self):
        self.event.set()
        self.b_quit = True

obj = Monitor()
obj.start()   # Launches a separate thread which can be controlled, based upon calls to begin, halt, and quit
time.sleep(1)

print("Trigger halt from main thread")
obj.halt()    # Only pauses the monitoring
time.sleep(1)

print("Trigger resume from main thread")
obj.begin()   # Resumes the monitoring
time.sleep(1)

print("Trigger quit from main thread")
obj.quit()    # Exits the thread completely
By starting main, I'm starting a thread that keeps a connection to an opcua server alive (and a few other things).
I now want to call a function inside this thread, but I don't want to import everything again (because it takes too long).
In if __name__ == "__main__": it is working, but when I run a second script, goIntoThread.py, it is not working, obviously because I didn't import the modules...
What are my options to trigger e.g. thd.doSomethingInThread() without importing everything again?
Thanks a lot!
main.py
import time

def importOnlyMain():
    global KeepConnected
    from keepConnected import KeepConnected

if __name__ == "__main__":
    importOnlyMain()
    global thd
    thd = KeepConnected()
    thd.start()
    time.sleep(3)
    thd.doSomethingInThread()

def goIntoThread():
    print("Going to Thread")
    thd.doSomethingInThread()
goIntoThread.py
import main
main.goIntoThread()
Comment: I get the following error:
thd.setBool()
NameError: global name 'thd' is not defined
I have this code:
class ExtendedProcess(multiprocessing.Process):
    def __init__(self):
        super(ExtendedProcess, self).__init__()
        self.stop_request = multiprocessing.Event()

    def join(self, timeout=None):
        logging.debug("stop request received")
        self.stop_request.set()
        super(ExtendedProcess, self).join(timeout)

    def run(self):
        logging.debug("process has started")
        while not self.stop_request.is_set():
            print "doing something"
        logging.debug("proc is stopping")
When I call start() on the process it should run forever, since self.stop_request is not set. Yet after a few milliseconds join() gets called by itself and breaks run. What is going on!? Why is join being called by itself?
Moreover, when I start a debugger and go through it line by line, it suddenly works fine... What am I missing?
OK, thanks to ely's answer the reason hit me:
There is a race condition:
a new process is created...
as it is starting up and about to run logging.debug("process has started"), the main function reaches its end.
the main function calls sys.exit, and on exit Python asks all outstanding processes to close with join().
since the process never actually reached "while not self.stop_request.is_set()", join is called and self.stop_request.set() runs. Now stop_request is set and the code closes.
As mentioned in the updated question, this is because of a race condition. Below I put an initial example highlighting a simplistic race condition where the race is against the overall program exit, but this could also be caused by other types of scope exits or other general race conditions involving your process.
I copied your class definition and added some "main" code to run it, here's my full listing:
import logging
import multiprocessing
import time

class ExtendedProcess(multiprocessing.Process):
    def __init__(self):
        super(ExtendedProcess, self).__init__()
        self.stop_request = multiprocessing.Event()

    def join(self, timeout=None):
        logging.debug("stop request received")
        self.stop_request.set()
        super(ExtendedProcess, self).join(timeout)

    def run(self):
        logging.debug("process has started")
        while not self.stop_request.is_set():
            print("doing something")
            time.sleep(1)
        logging.debug("proc is stopping")

if __name__ == "__main__":
    p = ExtendedProcess()
    p.start()
    while True:
        pass
The above code listing runs as expected for me using both Python 2.7.11 and 3.6.4. It loops infinitely and the process never terminates:
ely#eschaton:~/programming$ python extended_process.py
doing something
doing something
doing something
doing something
doing something
... and so on
However, if I instead use this code in my main section, it exits right away (as expected):
if __name__ == "__main__":
    p = ExtendedProcess()
    p.start()
This exits because the interpreter reaches the end of the program, which in turn triggers the automatic destruction of the p object as it goes out of scope of the whole program.
Note this could also explain why it works for you in the debugger. That is an interactive programming session, so after you start p, the debugger environment allows you to wait around and inspect it ... it would not be automatically destroyed unless you somehow invoked it within some scope that is exited while stepping through the debugger.
Just to verify the join behavior too, I also tried with this main block:
if __name__ == "__main__":
    log = logging.getLogger()
    log.setLevel(logging.DEBUG)

    p = ExtendedProcess()
    p.start()

    st_time = time.time()
    while time.time() - st_time < 5:
        pass

    p.join()
    print("Finished!")
and it works as expected:
ely#eschaton:~/programming$ python extended_process.py
DEBUG:root:process has started
doing something
doing something
doing something
doing something
doing something
DEBUG:root:stop request received
DEBUG:root:proc is stopping
Finished!
I have a very simple example; it prints out the names, but the problem is that when I press Ctrl+C the program doesn't return to the normal command-line prompt:
^CStopping
After that I only see my cursor blinking, and I can't do anything, so I have to close the window and open it again.
I'm running Ubuntu 12.10.
Here's my code:
import threading
import random
import time
import Queue
import urllib2
import sys

queue = Queue.Queue()
keep_running = True

class MyThread(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue
        self.names = ['Sophia', 'Irina', 'Tanya', 'Cait', 'Jess']

    def run(self):
        while keep_running:
            time.sleep(0.25)
            line = self.names[random.randint(0, len(self.names) - 1)]
            queue.put(line)
            self.queue.task_done()

class Starter():
    def __init__(self):
        self.queue = queue
        t = MyThread(self.queue)
        t.start()
        self.next()

    def next(self):
        while True:
            time.sleep(0.2)
            if not self.queue.empty():
                line = self.queue.get()
                print line, self.queue.qsize()
            else:
                print 'waiting for queue'

def main():
    try:
        Starter()
        queue.join()
    except KeyboardInterrupt, e:
        print 'Stopping'
        keep_running = False
        sys.exit(1)

main()
Your main problem is that you didn't declare keep_running as global, so main is just creating a local variable with the same name.
If you fix that, it will usually exit on some platforms.
If you want it to always exit on all platforms, you need to do two more things:
join the thread that you created.
protect the shared global variable with a Lock or other sync mechanism.
However, a shared global keep_running flag isn't really needed here anyway. You've already got a queue. Just define a special "shutdown" message you can post on the queue, or use closing the queue as a signal to shutdown.
While we're at it, unless you're trying to simulate a slow network or something, there is no need for that time.sleep in your code. Just call self.queue.get(timeout=0.2). That way, instead of always taking 0.2 seconds to get each entry, it will take up to 0.2 seconds, but as little as 0 if there's already something there.
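For illustration, here is a rough standalone sketch of that sentinel idea. It is not the asker's code; the names SHUTDOWN, producer and consumer are made up for the example, and it keeps the Python 2 Queue module used in the question:

import Queue
import threading
import time

SHUTDOWN = object()   # sentinel; no real work item will ever be this exact object

def producer(q):
    for name in ['Sophia', 'Irina', 'Tanya']:
        time.sleep(0.25)          # only to simulate slow arrivals
        q.put(name)
    q.put(SHUTDOWN)               # tell the consumer to stop, instead of sharing a flag

def consumer(q):
    while True:
        try:
            line = q.get(timeout=0.2)   # waits up to 0.2 s, returns sooner if data arrives
        except Queue.Empty:
            print 'waiting for queue'
            continue
        if line is SHUTDOWN:
            break                       # clean exit, no global flag needed
        print line, q.qsize()

q = Queue.Queue()
t = threading.Thread(target=producer, args=(q,))
t.start()
consumer(q)
t.join()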
Your main thread is stuck in Starter.next. The interrupt is raised there and propagates up to the first line of the try statement, where it is caught, jumping to the except clause before join can be called. Try putting the join call in a finally block (along with the sys.exit) or simply moving it into the exception handler, as sketched below.
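A rough sketch of that suggestion against the question's main(), also including the global fix from the other answer; this is untested, and how quickly queue.join() returns still depends on the question's somewhat unusual task_done() usage:

def main():
    global keep_running          # without this, the assignment below creates a local
    try:
        Starter()                # blocks inside next(); Ctrl+C is raised here
    except KeyboardInterrupt:
        print 'Stopping'
        keep_running = False     # lets MyThread.run() finish its loop
        sys.exit(1)
    finally:
        queue.join()             # the join, moved into a finally block so it runs either way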
Here's some slimmed down code that demonstrates my use of threading:
import threading
import Queue
import time

def example():
    """ used in MainThread as the example generator """
    while True:
        yield 'asd'

class ThreadSpace:
    """ A namespace to be shared among threads/functions """
    # set this to True to kill the threads
    exit_flag = False

class MainThread(threading.Thread):
    def __init__(self, output):
        super(MainThread, self).__init__()
        self.output = output

    def run(self):
        # this is a generator that contains a While True
        for blah in example():
            self.output.put(blah)
            if ThreadSpace.exit_flag:
                break
            time.sleep(0.1)

class LoggerThread(threading.Thread):
    def __init__(self, output):
        super(LoggerThread, self).__init__()
        self.output = output

    def run(self):
        while True:
            data = self.output.get()
            print data

def main():
    # start the logging thread
    logging_queue = Queue.Queue()
    logging_thread = LoggerThread(logging_queue)
    logging_thread.daemon = True
    logging_thread.start()

    # launch the main thread
    main_thread = MainThread(logging_queue)
    main_thread.start()

    try:
        while main_thread.isAlive():
            time.sleep(0.5)
    except KeyboardInterrupt:
        ThreadSpace.exit_flag = True

if __name__ == '__main__':
    main()
I have one main thread which gets data yielded to it from a blocking generator. In the real code, this generator yields network-related data it sniffs off a socket.
I then have a logging daemon thread which prints the data to the screen.
To exit the program cleanly, I'm catching a KeyboardInterrupt, which sets exit_flag to True - this tells the main thread to return.
9 times out of 10 this works fine and the program exits cleanly. However, on a few occasions I receive one of the following two errors:
Error 1:
^CTraceback (most recent call last):
  File "demo.py", line 92, in <module>
    main('')
  File "demo.py", line 87, in main
    time.sleep(0.5)
KeyboardInterrupt
Error 2:
Exception KeyboardInterrupt in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
I've run this exact sample code a few times and haven't been able to replicate the errors. The only difference between this and the real code is the example() generator. This, like I said, yields network data from the socket.
Can you see anything wrong with how I'm handling the threads?
KeyboardInterrupts are received by arbitrary threads. If the receiver isn't the main thread, it dies, the main thread is unaffected, ThreadSpace.exit_flag remains false, and the script keeps running.
If you want SIGINT to work, you can have each thread catch KeyboardInterrupt and call thread.interrupt_main() to get Python to exit, or use the signal module as the official documentation explains.
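A minimal sketch of the first option, with a hypothetical Worker class standing in for the question's threads (Python 2 naming; in Python 3 the module is _thread):

import thread       # Python 2 module name; use _thread in Python 3
import threading
import time

class Worker(threading.Thread):
    def run(self):
        try:
            while True:
                time.sleep(0.1)        # stands in for the real blocking work
        except KeyboardInterrupt:
            # if the interrupt lands in this thread, as described above,
            # forward it so the main thread's handler still fires
            thread.interrupt_main()

w = Worker()
w.daemon = True     # let the process exit once the main thread returns
w.start()
try:
    while w.isAlive():
        time.sleep(0.5)
except KeyboardInterrupt:
    print 'main thread saw the interrupt; shutting down'

On CPython the interrupt normally reaches the main thread directly, in which case the outer handler fires on its own; interrupt_main() just covers the case the answer describes, where a worker thread receives it instead.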