Here's some example code:

from threading import Thread

while True:  # main loop
    if command_received:
        thread = Thread(target=doItNOW)
        thread.start()
......

def doItNOW():
    some_blocking_operations()
My problem is that I need "some_blocking_operations" to start IMMEDIATELY (as soon as command_received becomes True),
but since they're blocking I can't execute them in my main loop,
and I can't change "some_blocking_operations" to be non-blocking either.
By "IMMEDIATELY" I mean as soon as possible, with no more than 10 ms of delay
(I once got a whole second of delay).
If that's not possible, a constant delay would also be acceptable (but it MUST be constant, with only a few milliseconds of error).
I'm currently working on a Linux system (Ubuntu, but it may be another distribution in the future; always Linux).
A Python solution would be amazing, but a different one would be better than nothing.
Any ideas?
Thanks in advance.
from threading import Thread

class Worker(Thread):
    def __init__(self, someParameter=True):
        Thread.__init__(self)
        # This is how you "send" parameters/variables
        # into the thread on start-up that the thread can use.
        # This is just an example in case you need it.
        self.someVariable = someParameter
        self.start()  # Note: this makes the thread self-starting;
                      # you could also call .start() in your main loop.

    def run(self):
        # And this is how you use the variable/parameter
        # that you passed in when you created the thread.
        if self.someVariable is True:
            some_blocking_operations()

while True:  # main loop
    if command_received:
        Worker()
This is a non-blocking execution of some_blocking_operations() in a threaded manner. I'm not sure whether you actually want to wait for the thread to finish, or whether you even care.
If all you want is to wait for a "command" to be received and then execute the blocking operations without waiting for them or verifying that they complete, then this should work for you.
Python mechanics of threading
Because of the global interpreter lock, Python executes bytecode in only one thread at a time, no matter how many CPU cores are available. What you're doing here is simply interleaving multiple executions on overlapping time slices: sometimes the main thread gets a turn, and sometimes your blocking call does. They won't actually run in parallel.
There are some "You can, but..." threads on this, like this one:
is python capable of running on multiple cores?
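To make the GIL's effect concrete, here is a minimal demo (my illustration, not part of the original answer; busy_loop is a made-up name) showing that two CPU-bound threads take roughly as long as running the same work back to back:

import threading
import time

def busy_loop(n=10_000_000):
    # Pure-Python CPU-bound work; the GIL serialises this.
    total = 0
    for i in range(n):
        total += i
    return total

# Sequential: run the work twice in a row.
start = time.perf_counter()
busy_loop()
busy_loop()
print("sequential:", time.perf_counter() - start)

# Threaded: run the same work in two threads at once.
t1 = threading.Thread(target=busy_loop)
t2 = threading.Thread(target=busy_loop)
start = time.perf_counter()
t1.start()
t2.start()
t1.join()
t2.join()
print("threaded:  ", time.perf_counter() - start)

On CPython you should see roughly the same time for both runs, because only one thread executes Python bytecode at a time.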
Related
I am trying to call a thread I define in a function from another function. Here is the first function, its purpose is to create and start a thread:
def startThread(func):
    listen = threading.Thread(target=func)
    listen.start()
I am trying to implement a function that will close the thread created in that first function, how should I go about it? I don't know how to successfully pass the thread.
def endThread(thread):
    thread.exit()
Thank you!
This problem is almost FAQ material.
To summarise: there is no way to kill a thread from the outside. You can of course pass the thread object to any function you want, but the threading library has no kill or exit calls.
There are more or less two distinct ways around this, depending on what your thread does.
The first method is to make it so that your thread co-operates. This approach is discussed here: Is there any way to kill a Thread in Python? This method adds a check to your thread loop and a way to raise a "stop signal", which will then cause the thread to exit from the inside when detected.
This method works fine if your thread is a relatively busy loop. If it is something that is blocking in IO wait, not so much, as your thread could be blocking in a read call for days or weeks before receiving something and executing the signal check part. Many IO calls accept a timeout value, and if it is acceptable to wait a couple of seconds before your thread exits, you can use this to force the exit check every N seconds without making your thread a busy loop.
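As a rough sketch of the co-operative approach (my illustration, not code from the linked question; the handle() helper is hypothetical), the thread checks a threading.Event on every pass, and a timeout on the blocking call guarantees the check runs regularly:

import queue
import threading

jobs = queue.Queue()
stop_event = threading.Event()

def worker():
    while not stop_event.is_set():
        try:
            # Time out after 1 second so the stop check above
            # runs at least that often, even when idle.
            job = jobs.get(timeout=1)
        except queue.Empty:
            continue
        handle(job)  # hypothetical job handler

t = threading.Thread(target=worker)
t.start()
# ... later, from the outside:
stop_event.set()  # the thread exits within about a second
t.join()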
The other approach is to replace threads with processes. You can force kill a subprocess. If you can communicate with your main program with queues instead of shared variables, this is not too complicated, either. If your program relies heavily on sharing global variables, this would require a major redesign.
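A minimal sketch of the process-based alternative (again my illustration): unlike a thread, a multiprocessing.Process can be force-killed from the outside, and a Queue replaces shared variables. Note that terminating a process while it is writing to a queue can corrupt the queue, so this is only safe for simple cases:

import multiprocessing
import time

def worker(q):
    while True:
        q.put("tick")  # communicate via the queue, not shared variables
        time.sleep(1)

if __name__ == "__main__":
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    time.sleep(3)
    p.terminate()  # force-kill; sends SIGTERM on Linux
    p.join()
    while not q.empty():
        print(q.get())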
If your program is waiting in IO loops, you need instantaneous termination and you are using shared global variables, then you are somewhat out of luck, as you either need to accept your threads not behaving nicely or you need to redesign some parts of your code to untangle either the IO wait or shared variables.
I'm making a virtual assistant using Python.
For that I need one main thread running continuously for speech recognition, and I want other threads, such as actions triggered after detecting speech, to run in the background.
For tasks like a timer, I want it to run in the background while the main thread keeps running, so that I can perform other tasks even while the timer counts down; when it reaches the set time, it should return the result as TTS to the main thread.
The current structure I'm using is:

main.py
  -> class Main()
     -> running a logger in the background  // which is meant to exit with the main loop
     -> and a Command() loop for continuous speech recognition
        -> which links to Brain.py and on to timer.py
A few words about multithreading vs multiprocessing:
In multithreading you start a thread in the current process. Python runs threads (through the global interpreter lock) in short sequential slices, never really in parallel. The upside is that threads can access the same variables (i.e. share memory).
On the other hand, in multiprocessing you run a new process (which appears in the OS as a separate program). Processes can really run in parallel, but sharing variables is a lot more tricky (and much slower as well).
For your use case it seems that no two things are "CPU bound", i.e. it will not be the case that two things need 100% of the CPU at the same time. In that situation multithreading is probably the better solution, that is, you should go for James Lim's solution.
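For completeness, here is a minimal threading-based sketch of the timer idea (my illustration, not James Lim's actual answer; the 'foo' return value mirrors the multiprocessing example below):

import threading

timer_return = []

# threading.Timer runs the callback after the delay in its own thread;
# because threads share memory, a plain list works as the return channel.
timer = threading.Timer(3, timer_return.append, args=("foo",))
timer.start()
# ... the main thread stays free for speech recognition here ...
timer.join()  # or poll timer.is_alive() instead of blocking
print("timer finished, return value is {}".format(timer_return))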
If you still want to go for multiprocessing, then the following code could be your basic setup for the timer. The speech recognition function would work accordingly (especially the part about returning values through the shared list should be sufficient for returning TTS from the speech recognition):
import multiprocessing
import time

def timer_with_return(seconds, return_list):
    time.sleep(seconds)
    return_list.append('foo')

if __name__ == "__main__":
    # variables created by the manager are shared among processes,
    # e.g. for return values
    manager = multiprocessing.Manager()
    timer_return = manager.list()
    timer = multiprocessing.Process(target=timer_with_return, args=(3, timer_return))
    timer.start()
    while True:
        time.sleep(1)
        if not timer.is_alive():
            break
        print("timer is still running")
    timer.join()  # make sure the process is really finished
    print("timer finished, return value is {}".format(timer_return))
Running this produces:
timer is still running
timer is still running
timer is still running
timer finished, return value is ['foo']
I am creating a custom job scheduler with a web frontend in Python 3.4 on Linux. This program creates a daemon (consumer) thread that waits for jobs to become available in a PriorityQueue. These jobs can be added manually through the web interface, which puts them into the queue. When the consumer thread finds a job, it executes a program using subprocess.run and waits for it to finish.
The basic idea of the worker thread:
import subprocess
import threading

class Worker(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue
        # more code here

    def run(self):
        while True:
            try:
                job = self.queue.get()
                # do some work
                proc = subprocess.run("myprogram", timeout=my_timeout)
                # do some more things
            except subprocess.TimeoutExpired:
                # do some administration
                self.queue.put(job)
However:
This consumer should be able to receive some kind of signal from the frontend (main thread) that it should stop the current job and instead work on the next job in the queue (saving the state of the current job and adding it to the end of the queue again). This can (and will most likely) happen while blocked on subprocess.run().
The subprocesses can simply be killed (the program that is executed saves some state in a file), but the worker thread needs to do some administration on the killed job to make sure it can be resumed later on.
There can be multiple such worker threads.
Signal handlers are not an option (since they are always handled by the main thread which is a webserver and should not be bothered with this).
Having an event loop in which the process actively polls for events (such as the child exiting, the timeout occurring or the interrupt event) is in this context not really a solution but an ugly hack. The jobs are performance-heavy and constant context switches are unwanted.
What synchronization primitives should I use to interrupt this thread or to make sure it waits for several events at the same time in a blocking fashion?
I think you've accidentally glossed over a simple solution: your second bullet point says that you have the ability to kill the programs running in subprocesses. Note that subprocess.call returns the return code of the subprocess. This means you can let the main thread kill the subprocess and just check the return code to see whether you need to do any cleanup. Even better, you could use subprocess.check_call instead, which raises an exception for you if the return code isn't 0. I don't know what platform you're working on, but on Linux a killed process generally doesn't return 0.
It could look something like this:
import subprocess
import threading

class Worker(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue
        # more code here

    def run(self):
        while True:
            try:
                job = self.queue.get()
                # do some work
                subprocess.check_call("myprogram", timeout=my_timeout)
                # do some more things
            except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
                # do some administration
                self.queue.put(job)
Note that if you're using Python 3.5, you can use subprocess.run instead, and set the check argument to True.
If you have a strong need to handle cases where the worker needs to be interrupted while it isn't running the subprocess, then I think you're going to have to use a polling loop, because I don't think the behavior you're looking for is supported for threads in Python. You can use a threading.Event object to pass the "stop working now" pseudo-signal from your main thread to the worker, and have the worker periodically check the state of that event object.
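A hedged sketch of that Event check woven into the worker loop (my illustration; my_timeout's value is an assumption):

import queue
import subprocess
import threading

stop_event = threading.Event()
my_timeout = 60  # assumed per-job timeout, in seconds

class Worker(threading.Thread):
    def __init__(self, jobs):
        threading.Thread.__init__(self)
        self.jobs = jobs

    def run(self):
        while True:
            if stop_event.is_set():
                return  # main thread asked us to stop
            try:
                # Short timeout so the stop check above runs often.
                job = self.jobs.get(timeout=0.5)
            except queue.Empty:
                continue
            try:
                subprocess.check_call("myprogram", timeout=my_timeout)
            except (subprocess.TimeoutExpired, subprocess.CalledProcessError):
                self.jobs.put(job)  # requeue the interrupted job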
If you're willing to consider using multiple processes instead of threads, consider switching over to the multiprocessing module, which would allow you to handle signals. There is more overhead to spawning full-blown subprocesses instead of threads, but you're essentially looking for signal-like asynchronous behavior, and I don't think Python's threading library supports anything like that. One benefit, though, is that you would be freed from the global interpreter lock, so you may actually see some speed benefit if your worker processes (formerly threads) are doing anything CPU intensive.
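A minimal sketch of that idea (my illustration; the print is a stand-in for the job administration): each worker process installs its own SIGTERM handler, something a thread cannot do:

import multiprocessing
import signal
import time

def worker():
    def on_term(signum, frame):
        # Do the cleanup/administration here, then exit.
        print("worker: saving state before exit")
        raise SystemExit(0)

    # A process can install its own signal handler; a thread cannot.
    signal.signal(signal.SIGTERM, on_term)
    while True:
        time.sleep(1)  # stand-in for the real work

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.start()
    time.sleep(3)
    p.terminate()  # delivers SIGTERM, which triggers on_term in the worker
    p.join()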
I am writing a program that creates two new processes and must wait for them both to finish before continuing. How does one launch both processes and have the program wait for both to exit? Consider the pseudocode for what I currently have:
create_process("program1.exe").wait()
create_process("program2.exe").wait()
This is inefficient, as program2 could run concurrently with program1 (but here it doesn't start until program1 has finished). Alternatively:
create_process("program1.exe")
create_process("program2.exe").wait()
This may be wrong, as program1 may take longer than program2.
I'm interested in a general solution; I bet there are algorithms or design patterns invented to deal with this stuff. But to add context to the question: I'm writing a Python script that calls pgsql2shp.exe twice, to export two tables from the database to the local machine and then perform an intersection. This script is written in Python 2.7 and uses subprocess.Popen.
How about using threading?
If you spin up a couple of threads, each thread can run independently, and you can join the threads when they complete.
Try some code like this (it's heavily commented so that you can follow what's going on):
# Import threading
import threading

# Create a handler class.
# Each instance will run in its own independent thread.
class ThreadedHandler(threading.Thread):
    # This method is called when you call threadInstance.start()
    def run(self):
        # Run your sub process and wait for it.
        # How you run your process is up to you.
        create_process(self.programName).wait()

# Create a new thread object
thread1 = ThreadedHandler()
# Set your program name so that when the thread is started
# the correct process is run
thread1.programName = 'program1.exe'
# Start the thread
thread1.start()

# Again, create a new thread object for the 2nd program
thread2 = ThreadedHandler()
# Set the program name
thread2.programName = 'program2.exe'
# Start the thread
thread2.start()

# At this point, each program is running independently in separate threads.
# Each thread will wait for its respective sub process to complete.

# Here, we join both threads. (Wait for both threads to complete.)
thread1.join()
thread2.join()

# When we get here, both of our programs are finished and they both ran in parallel.
The thread module works for me, though. How do I check whether a thread made with the thread module (_thread in Python 3) is running? When the function the thread is executing ends, does the thread end too, or doesn't it?
def __init__(self):
    self.thread = None
......
if self.thread == None or not self.thread.isAlive():
    self.thread = thread.start_new_thread(self.dosomething, ())
else:
    tkMessageBox.showwarning("XXXX", "There's no need to have more than two threads")
I know there is no function called isAlive() in the "thread" module; is there any alternative?
But there isn't any reason to use the "threading" module instead, is there?
Unless you really need the low-level capabilities of the internal _thread module, you really should use the threading module instead. It makes everything easier to use and comes with helpers such as is_alive().
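Rewritten with the threading module, your snippet could look roughly like this (a sketch; I've used a plain print in place of the tkMessageBox warning to keep it self-contained, and dosomething stands in for your worker function):

import threading

class App:
    def __init__(self):
        self.thread = None

    def dosomething(self):
        pass  # the long-running work goes here

    def start_work(self):
        # threading.Thread objects have is_alive(), unlike raw _thread handles.
        if self.thread is None or not self.thread.is_alive():
            self.thread = threading.Thread(target=self.dosomething)
            self.thread.start()
        else:
            print("There's no need to have more than two threads")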
By the way, the alternative to restarting a thread like you do in your example code would be to keep it running but have it wait for additional jobs. For example, you could have a queue somewhere which keeps track of all the jobs you want the thread to do, and the thread keeps working on them; when the queue is empty, it does not terminate but waits for new jobs to appear. Only at the end of the application do you signal the thread to stop waiting and terminate it. A sketch of that pattern follows.
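A minimal sketch of such a long-lived worker (my illustration; None serves as the shutdown sentinel):

import queue
import threading

jobs = queue.Queue()

def worker():
    while True:
        job = jobs.get()   # blocks while the queue is empty
        if job is None:    # sentinel: time to shut down
            break
        job()              # run the job (here: any callable)

t = threading.Thread(target=worker)
t.start()

jobs.put(lambda: print("job 1"))
jobs.put(lambda: print("job 2"))

# At application exit, signal the worker to stop and wait for it.
jobs.put(None)
t.join()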