Lock with timeout in Python 2.7

The accepted solution here doesn't work in all situations:
How to implement a Lock with a timeout in Python 2.7
(In particular, the last thread that owns the lock calls cond.notify() when no one is waiting on the condition variable.)
So I tried a spin lock like this:
import threading
import time

class TimeLock(object):
    def __init__(self):
        self._lock = threading.Lock()

    def acquire_lock(self, timeout=0):
        ''' If timeout == 0, do a blocking acquire;
            otherwise, return False after [timeout] seconds.
        '''
        if timeout == 0:
            return self._lock.acquire()  # Block for the lock
        current_time = start_time = time.time()
        while current_time < start_time + timeout:
            if self._lock.acquire(False):  # Contend for the lock, without blocking
                return True
            else:
                time.sleep(1)
                current_time = time.time()
        # Timed out
        return False

    def release_lock(self):
        self._lock.release()
However, in testing, the spin lock almost always starves when contending with threads doing a blocking acquire.
Are there other solutions?

It turns out that Python queues have a timeout feature in the Queue module in 2.7, so I can simulate a lock with timeout by mapping the operations like this:
lock.acquire() -> Queue.get(block=True, timeout=timeout)
lock.release() -> Queue.put(1, block=False)
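For reference, here is a minimal sketch of that idea wrapped in a class (the QueueLock name is mine; it pre-loads the queue with a single token representing the unlocked state):

import Queue  # Python 2.7 stdlib

class QueueLock(object):
    ''' A lock with timeout, simulated with a one-slot queue. '''
    def __init__(self):
        self._queue = Queue.Queue(maxsize=1)
        self._queue.put(1, block=False)  # One token means "unlocked"

    def acquire(self, timeout=None):
        ''' Take the token; block up to timeout seconds, return False on failure. '''
        try:
            self._queue.get(block=True, timeout=timeout)
            return True
        except Queue.Empty:
            return False

    def release(self):
        self._queue.put(1, block=False)  # Put the token back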

Related

Python sleep without blocking other processes

I am running a Python script every hour, and I've been using time.sleep(3600) inside a while loop. It seems to work as needed, but I am worried about it blocking new tasks. From my research, it seems that it only blocks the current thread, but I want to be 100% sure. While the hourly job shouldn't take more than 15 min, if it does, or if it hangs, I don't want it to block the next one that starts. This is how I've done it:
import threading
import time

def long_hourly_job():
    # do some long task
    pass

if __name__ == "__main__":
    while True:
        thr = threading.Thread(target=long_hourly_job)
        thr.start()
        time.sleep(3600)
Is this sufficient?
Also, the reason I am using time.sleep for this hourly job rather than a cron job is that I want to do everything in code, to make dockerization cleaner.
The code will work (i.e., sleep only blocks the calling thread), but you should be careful of some issues. Some of them have already been stated in the comments, like the possibility of time overlaps between threads.

The main issue is that your code is slowly leaking resources. After creating a thread, the OS keeps some data structures around even after the thread has finished running. This is necessary, for example, to keep the thread's exit status until the thread's creator requests it. The function to clear these structures (conceptually equivalent to closing a file) is called join. A thread that has finished running and has not been joined is termed a 'zombie thread'. The amount of memory required by these structures is very small, and your program would have to run for centuries to exhaust any reasonable amount of available RAM. Nevertheless, it is good practice to join all the threads you create. A simple approach (if you know that 3600 s is more than enough time for the thread to finish) would be:
if __name__ == "__main__":
    while True:
        thr = threading.Thread(target=long_hourly_job)
        thr.start()
        thr.join(3600)  # Wait at most 3600 s for the thread to finish
        if thr.is_alive():  # join() does not return useful information
            print("Ooops: the last job did not finish on time")
A better approach if you think that it is possible that sometimes 3600 s is not enough time for the thread to finish:
if __name__ == "__main__":
    previous = []
    while True:
        thr = threading.Thread(target=long_hourly_job)
        thr.start()
        previous.append(thr)
        time.sleep(3600)
        for i in reversed(range(len(previous))):
            t = previous[i]
            t.join(0)
            if t.is_alive():
                print("Ooops: thread still running")
            else:
                print("Thread finished")
                previous.remove(t)
I know that the print statement makes no sense: use logging instead.
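For example, a minimal logging setup with the stdlib logging module could look like this (a sketch; the format string is just one reasonable choice):

import logging

logging.basicConfig(level=logging.INFO,
                    format='[%(asctime)s %(threadName)s] %(message)s')
log = logging.getLogger(__name__)

log.warning("Ooops: thread still running")
log.info("Thread finished")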
Perhaps a little late. I tested the code from the other answers, but my main process got stuck (perhaps I'm doing something wrong?). I then tried a different approach. It's based on the threading.Timer class, but tries to emulate the behavior and features of QtCore.QTimer():
import threading
import time

class Timer:
    SNOOZE = 0
    ONEOFF = 1

    def __init__(self, timerType=SNOOZE):
        self._timerType = timerType
        self._keep = threading.Event()
        self._timerSnooze = None
        self._timerOneoff = None

    class _SnoozeTimer(threading.Timer):
        # This uses the threading.Timer class, but consumes more CPU?!
        def __init__(self, event, sec, callback, *args):
            threading.Thread.__init__(self)
            self.stopped = event
            self.sec = sec
            self.callback = callback
            self.args = args

        def run(self):
            # Fire the callback every `sec` seconds until the event is set
            while not self.stopped.wait(self.sec):
                self.callback(*self.args)

    def start(self, msec: int, callback, *args, start_now=False) -> bool:
        started = False
        if msec > 0:
            if self._timerType == self.SNOOZE:
                if self._timerSnooze is None:
                    self._timerSnooze = self._SnoozeTimer(self._keep, msec / 1000, callback, *args)
                    self._timerSnooze.start()
                    if start_now:
                        callback(*args)
                    started = True
            else:
                if self._timerOneoff is None:
                    # threading.Timer expects its positional args as a sequence
                    self._timerOneoff = threading.Timer(msec / 1000, callback, list(args))
                    self._timerOneoff.start()
                    started = True
        return started

    def stop(self):
        if self._timerType == self.SNOOZE:
            self._keep.set()
            self._timerSnooze.join()
        else:
            self._timerOneoff.cancel()
            self._timerOneoff.join()

    def is_alive(self):
        if self._timerType == self.SNOOZE:
            isAlive = (self._timerSnooze is not None and self._timerSnooze.is_alive()
                       and not self._keep.is_set())
        else:
            isAlive = self._timerOneoff is not None and self._timerOneoff.is_alive()
        return isAlive

    isAlive = is_alive

KEEP = True

def callback():
    global KEEP
    KEEP = False
    print("ENDED", time.strftime("%M:%S"))

if __name__ == "__main__":
    count = 0
    t = Timer(timerType=Timer.ONEOFF)
    t.start(5000, callback)
    print("START", time.strftime("%M:%S"))
    while KEEP:
        if count % 10000000 == 0:
            print("STILL RUNNING")
        count += 1
Notice that the timer's while loop runs in a separate thread and uses a callback function invoked when the time is over (in your case, this callback would be used to check whether the long-running process has finished).
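For instance, applied to the hourly-job question above, the snooze timer could serve as a watchdog, roughly like this (a sketch; the 60-second polling interval and the job/watchdog names are my own):

import threading

def long_hourly_job():
    pass  # Stand-in for the real hourly work

job = threading.Thread(target=long_hourly_job)
job.start()

watchdog = Timer(timerType=Timer.SNOOZE)  # The Timer class from above
watchdog.start(60000, lambda: print("job alive:", job.is_alive()))  # Check every 60 s
# ... later, when the job no longer needs watching:
watchdog.stop()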

how to terminate a thread from within another thread [duplicate]

How can I start and stop a thread with my poor thread class?
It runs in a loop, and I want to restart it again at the beginning of the code. How can I do start-stop-restart-stop-restart?
My class:
import threading
import time

class Concur(threading.Thread):
    def __init__(self):
        self.stopped = False
        threading.Thread.__init__(self)

    def run(self):
        i = 0
        while not self.stopped:
            time.sleep(1)
            i = i + 1
In the main code, I want:
inst = Concur()
while condition:
    inst.start()
    # After some operation
    inst.stop()
    # Some other operation
You can't actually stop and then restart a thread, since you can't call its start() method again after its run() method has terminated. However, you can make one pause and then later resume its execution by using a threading.Condition variable to avoid concurrency problems when checking or changing its running state.
threading.Condition objects have an associated threading.Lock object and methods to wait for it to be released, and they will notify any waiting threads when that occurs. Here's an example derived from the code in your question which shows this being done. In the example code I've made the Condition variable a part of the Thread subclass instances to better encapsulate the implementation and avoid needing to introduce additional global variables:
from __future__ import print_function
import threading
import time

class Concur(threading.Thread):
    def __init__(self):
        super(Concur, self).__init__()
        self.iterations = 0
        self.daemon = True  # Allow main to exit even if still running.
        self.paused = True  # Start out paused.
        self.state = threading.Condition()

    def run(self):
        self.resume()
        while True:
            with self.state:
                if self.paused:
                    self.state.wait()  # Block execution until notified.
            # Do stuff...
            time.sleep(.1)
            self.iterations += 1

    def pause(self):
        with self.state:
            self.paused = True  # Block self.

    def resume(self):
        with self.state:
            self.paused = False
            self.state.notify()  # Unblock self if waiting.


class Stopwatch(object):
    """ Simple class to measure elapsed times. """
    def start(self):
        """ Establish reference point for elapsed time measurements. """
        self.start_time = time.time()
        return self

    @property
    def elapsed_time(self):
        """ Seconds since started. """
        try:
            return time.time() - self.start_time
        except AttributeError:  # Wasn't explicitly started.
            self.start_time = time.time()
            return 0


MAX_RUN_TIME = 5  # Seconds.
concur = Concur()
stopwatch = Stopwatch()

print('Running for {} seconds...'.format(MAX_RUN_TIME))
concur.start()
while stopwatch.elapsed_time < MAX_RUN_TIME:
    concur.resume()
    # Can also do other concurrent operations here...
    concur.pause()
    # Do some other stuff...

# Show Concur thread executed.
print('concur.iterations: {}'.format(concur.iterations))
This is David Heffernan's idea fleshed-out. The example below runs for 1 second, then stops for 1 second, then runs for 1 second, and so on.
import time
import threading
import datetime as DT
import logging

logger = logging.getLogger(__name__)

def worker(cond):
    i = 0
    while True:
        with cond:
            cond.wait()
            logger.info(i)
            time.sleep(0.01)
            i += 1

logging.basicConfig(level=logging.DEBUG,
                    format='[%(asctime)s %(threadName)s] %(message)s',
                    datefmt='%H:%M:%S')

cond = threading.Condition()
t = threading.Thread(target=worker, args=(cond, ))
t.daemon = True
t.start()

start = DT.datetime.now()
while True:
    now = DT.datetime.now()
    if (now - start).total_seconds() > 60:
        break
    if now.second % 2:
        with cond:
            cond.notify()
The implementation of stop() would look like this:
def stop(self):
    self.stopped = True
If you want to restart, then you can just create a new instance and start that.
while condition:
    inst = Concur()
    inst.start()
    # After some operation
    inst.stop()
    # Some other operation
The documentation for Thread makes it clear that the start() method can only be called once for each instance of the class.
If you want to pause and resume a thread, then you'll need to use a condition variable.

How to let a Python thread finish gracefully

I'm doing a project involving data collection and logging. I have two threads running, a collection thread and a logging thread, both started in main. I'm trying to allow the program to be terminated gracefully with Ctrl-C.
I'm using a threading.Event to signal to the threads to end their respective loops. It works fine to stop the sim_collectData method, but it doesn't seem to be properly stopping the logData thread. The Collection terminated print statement is never executed, and the program just stalls. (It doesn't end, just sits there).
The second while loop in logData is to make sure everything in the queue is logged. The goal is for Ctrl-C to stop the collection thread immediately, then allow the logging thread to finish emptying the queue, and only then fully terminate the program. (Right now, the data is just being printed out - eventually it's going to be logged to a database).
I don't understand why the second thread never terminates. I'm basing what I've done on this answer: Stopping a thread after a certain amount of time. What am I missing?
import threading
import random
import time
import Queue

def sim_collectData(input_queue, stop_event):
    ''' This provides some output simulating the serial
        data from the data logging hardware.
    '''
    n = 0
    while not stop_event.is_set():
        input_queue.put("DATA: <here are some random data> " + str(n))
        stop_event.wait(random.randint(0, 5))
        n += 1
    print "Terminating data collection..."
    return

def logData(input_queue, stop_event):
    n = 0
    # We *don't* want to loop based on queue size because the queue could
    # theoretically be empty while waiting on some data.
    while not stop_event.is_set():
        d = input_queue.get()
        if d.startswith("DATA:"):
            print d
            input_queue.task_done()
            n += 1
    # If the stop event is received and the previous loop terminates,
    # finish logging the rest of the items in the queue.
    print "Collection terminated. Logging remaining data to database..."
    while not input_queue.empty():
        d = input_queue.get()
        if d.startswith("DATA:"):
            print d
            input_queue.task_done()
            n += 1
    return

def main():
    input_queue = Queue.Queue()
    stop_event = threading.Event()  # Used to signal termination to the threads
    print "Starting data collection thread...",
    collection_thread = threading.Thread(target=sim_collectData, args=(input_queue, stop_event))
    collection_thread.start()
    print "Done."
    print "Starting logging thread...",
    logging_thread = threading.Thread(target=logData, args=(input_queue, stop_event))
    logging_thread.start()
    print "Done."
    try:
        while True:
            time.sleep(10)
    except (KeyboardInterrupt, SystemExit):
        # Stop data collection. Let the logging thread finish logging
        # everything in the queue.
        stop_event.set()

main()
The problem is that your logger is waiting on d = input_queue.get() and will not check the event. One solution is to skip the event completely and invent a unique message that tells the logger to stop. When you get a signal, send that message to the queue.
import threading
import Queue
import random
import time

def sim_collectData(input_queue, stop_event):
    ''' This provides some output simulating the serial
        data from the data logging hardware.
    '''
    n = 0
    while not stop_event.is_set():
        input_queue.put("DATA: <here are some random data> " + str(n))
        stop_event.wait(random.randint(0, 5))
        n += 1
    print "Terminating data collection..."
    input_queue.put(None)  # Sentinel: tells the logger to stop
    return

def logData(input_queue):
    n = 0
    # We *don't* want to loop based on queue size because the queue could
    # theoretically be empty while waiting on some data.
    while True:
        d = input_queue.get()
        if d is None:
            input_queue.task_done()
            return
        if d.startswith("DATA:"):
            print d
            input_queue.task_done()
            n += 1

def main():
    input_queue = Queue.Queue()
    stop_event = threading.Event()  # Used to signal termination to the threads
    print "Starting data collection thread...",
    collection_thread = threading.Thread(target=sim_collectData, args=(input_queue, stop_event))
    collection_thread.start()
    print "Done."
    print "Starting logging thread...",
    logging_thread = threading.Thread(target=logData, args=(input_queue,))
    logging_thread.start()
    print "Done."
    try:
        while True:
            time.sleep(10)
    except (KeyboardInterrupt, SystemExit):
        # Stop data collection. Let the logging thread finish logging
        # everything in the queue.
        stop_event.set()

main()
I'm not an expert in threading, but in your logData function the first d = input_queue.get() is blocking, i.e., if the queue is empty it will sit and wait forever until a queue message is received. This is likely why the logData thread never terminates; it's sitting waiting forever for a queue message.
Refer to the Python docs to change this to a non-blocking queue read: use .get(False) or .get_nowait(), but either will require some exception handling for cases when the queue is empty.
You are calling a blocking get on your input_queue with no timeout. In either section of logData, if you call input_queue.get() and the queue is empty, it will block indefinitely, preventing the logging_thread from reaching completion.
To fix, you will want to call input_queue.get_nowait() or pass a timeout to input_queue.get().
Here is my suggestion:
def logData(input_queue, stop_event):
    n = 0
    while not stop_event.is_set():
        try:
            d = input_queue.get_nowait()
            if d.startswith("DATA:"):
                print "LOG: " + d
                n += 1
        except Queue.Empty:
            time.sleep(1)
    return
You are also signalling the threads to terminate, but not waiting for them to do so. Consider doing this in your main function.
try:
    while True:
        time.sleep(10)
except (KeyboardInterrupt, SystemExit):
    stop_event.set()
    collection_thread.join()
    logging_thread.join()
Based on the answer of tdelaney, I created an iterator-based approach. The iterator exits when the termination message is encountered. I also added a counter of how many get-calls are currently blocking, and a stop method which sends just as many termination messages. To prevent a race condition between incrementing and reading the counter, I set a stopping bit there. Furthermore, I don't use None as the termination message, because it cannot necessarily be compared to other data types when using a PriorityQueue.
There are two restrictions that I had no need to eliminate. First, the stop method waits until the queue is empty before shutting down the threads. Second, I did not add any code to make the queue reusable after stop. The latter can probably be added quite easily, while the former requires being careful about concurrency and the context in which the code is used.
You have to decide whether you want stop to also wait for all the termination messages to be consumed. I chose to put the necessary join there, but you may just remove it.
So this is the code:
import threading, queue
from functools import total_ordering

@total_ordering
class Final:
    def __repr__(self):
        return "∞"
    def __lt__(self, other):
        return False
    def __eq__(self, other):
        return isinstance(other, Final)

Infty = Final()

class IterQueue(queue.Queue):
    def __init__(self):
        self.lock = threading.Lock()
        self.stopped = False
        self.getters = 0
        super().__init__()

    def __iter__(self):
        return self

    def get(self):
        raise NotImplementedError("This queue may only be used as an iterator.")

    def __next__(self):
        with self.lock:
            if self.stopped:
                raise StopIteration
            self.getters += 1
        data = super().get()
        if data == Infty:
            self.task_done()
            raise StopIteration
        with self.lock:
            self.getters -= 1
        return data

    def stop(self):
        self.join()
        self.stopped = True
        with self.lock:
            for i in range(self.getters):
                self.put(Infty)
        self.join()

class IterPriorityQueue(IterQueue, queue.PriorityQueue):
    pass
Oh, and I wrote this in python 3.2. So after backporting,
import threading, Queue
from functools import total_ordering

@total_ordering
class Final:
    def __repr__(self):
        return "Infinity"
    def __lt__(self, other):
        return False
    def __eq__(self, other):
        return isinstance(other, Final)

Infty = Final()

class IterQueue(Queue.Queue, object):
    def __init__(self):
        self.lock = threading.Lock()
        self.stopped = False
        self.getters = 0
        super(IterQueue, self).__init__()

    def __iter__(self):
        return self

    def get(self):
        raise NotImplementedError("This queue may only be used as an iterator.")

    def next(self):
        with self.lock:
            if self.stopped:
                raise StopIteration
            self.getters += 1
        data = super(IterQueue, self).get()
        if data == Infty:
            self.task_done()
            raise StopIteration
        with self.lock:
            self.getters -= 1
        return data

    def stop(self):
        self.join()
        self.stopped = True
        with self.lock:
            for i in range(self.getters):
                self.put(Infty)
        self.join()

class IterPriorityQueue(IterQueue, Queue.PriorityQueue):
    pass
you would use it as
import threading
import random
import time

def sim_collectData(input_queue, stop_event):
    ''' This provides some output simulating the serial
        data from the data logging hardware.
    '''
    n = 0
    while not stop_event.is_set():
        input_queue.put("DATA: <here are some random data> " + str(n))
        stop_event.wait(random.randint(0, 5))
        n += 1
    print "Terminating data collection..."
    return

def logData(input_queue):
    n = 0
    # We *don't* want to loop based on queue size because the queue could
    # theoretically be empty while waiting on some data.
    for d in input_queue:
        if d.startswith("DATA:"):
            print d
            input_queue.task_done()
            n += 1

def main():
    input_queue = IterQueue()
    stop_event = threading.Event()  # Used to signal termination to the threads
    print "Starting data collection thread...",
    collection_thread = threading.Thread(target=sim_collectData, args=(input_queue, stop_event))
    collection_thread.start()
    print "Done."
    print "Starting logging thread...",
    logging_thread = threading.Thread(target=logData, args=(input_queue,))
    logging_thread.start()
    print "Done."
    try:
        while True:
            time.sleep(10)
    except (KeyboardInterrupt, SystemExit):
        # Stop data collection. Let the logging thread finish logging
        # everything in the queue.
        stop_event.set()
        input_queue.stop()

main()

Python threading: can I sleep on two threading.Event()s simultaneously?

If I have two threading.Event() objects, and wish to sleep until either one of them is set, is there an efficient way to do that in python? Clearly I could do something with polling/timeouts, but I would like to really have the thread sleep until one is set, akin to how select is used for file descriptors.
So in the following implementation, what would an efficient non-polling implementation of wait_for_either look like?
a = threading.Event()
b = threading.Event()
wait_for_either(a, b)
Here is a solution with no polling and no extra threads: modify the existing Events to fire a callback whenever they change, and handle setting a new event in that callback:
import threading

def or_set(self):
    self._set()
    self.changed()

def or_clear(self):
    self._clear()
    self.changed()

def orify(e, changed_callback):
    e._set = e.set
    e._clear = e.clear
    e.changed = changed_callback
    e.set = lambda: or_set(e)
    e.clear = lambda: or_clear(e)

def OrEvent(*events):
    or_event = threading.Event()
    def changed():
        bools = [e.is_set() for e in events]
        if any(bools):
            or_event.set()
        else:
            or_event.clear()
    for e in events:
        orify(e, changed)
    changed()
    return or_event
Sample usage:
def wait_on(name, e):
    print "Waiting on %s..." % (name,)
    e.wait()
    print "%s fired!" % (name,)

def test():
    import time
    e1 = threading.Event()
    e2 = threading.Event()
    or_e = OrEvent(e1, e2)
    threading.Thread(target=wait_on, args=('e1', e1)).start()
    time.sleep(0.05)
    threading.Thread(target=wait_on, args=('e2', e2)).start()
    time.sleep(0.05)
    threading.Thread(target=wait_on, args=('or_e', or_e)).start()
    time.sleep(0.05)
    print "Firing e1 in 2 seconds..."
    time.sleep(2)
    e1.set()
    time.sleep(0.05)
    print "Firing e2 in 2 seconds..."
    time.sleep(2)
    e2.set()
    time.sleep(0.05)
The result of which was:
Waiting on e1...
Waiting on e2...
Waiting on or_e...
Firing e1 in 2 seconds...
e1 fired!or_e fired!
Firing e2 in 2 seconds...
e2 fired!
This should be thread-safe. Any comments are welcome.
EDIT: Oh and here is your wait_for_either function, though the way I wrote the code, it's best to make and pass around an or_event. Note that the or_event shouldn't be set or cleared manually.
def wait_for_either(e1, e2):
    OrEvent(e1, e2).wait()
I think the standard library provides a pretty canonical solution to this problem that I don't see brought up in this question: condition variables. You have your main thread wait on a condition variable, and poll the set of events each time it is notified. It is only notified when one of the events is updated, so there is no wasteful polling. Here is a Python 3 example:
from threading import Thread, Event, Condition
from time import sleep
from random import random

event1 = Event()
event2 = Event()
cond = Condition()

def thread_func(event, i):
    delay = random()
    print("Thread {} sleeping for {}s".format(i, delay))
    sleep(delay)
    event.set()
    with cond:
        cond.notify()
    print("Thread {} done".format(i))

with cond:
    Thread(target=thread_func, args=(event1, 1)).start()
    Thread(target=thread_func, args=(event2, 2)).start()
    print("Threads started")
    while not (event1.is_set() or event2.is_set()):
        print("Entering cond.wait")
        cond.wait()
    print("Exited cond.wait ({}, {})".format(event1.is_set(), event2.is_set()))

print("Main thread done")
Example output:
Thread 1 sleeping for 0.31569427100177794s
Thread 2 sleeping for 0.486548134317051s
Threads started
Entering cond.wait
Thread 1 done
Exited cond.wait (True, False)
Main thread done
Thread 2 done
Note that with no extra threads or unnecessary polling, you can wait for an arbitrary predicate to become true (e.g. for any particular subset of the events to be set). There's also a wait_for wrapper for the while not pred: cond.wait() pattern, which can make your code a bit easier to read.
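For example, the polling loop above collapses to a single call (a sketch reusing cond, event1, and event2 from the example):

with cond:
    cond.wait_for(lambda: event1.is_set() or event2.is_set())
print("At least one event is set")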
One solution (with polling) would be to do sequential waits on each Event in a loop
def wait_for_either(a, b):
    while True:
        if a.wait(tunable_timeout):
            break
        if b.wait(tunable_timeout):
            break
I think that if you tune the timeout well enough the results would be OK.
The best non-polling solution I can think of is to wait for each Event in a different thread, and set a shared Event that the main thread then waits on.
def repeat_trigger(waiter, trigger):
    waiter.wait()
    trigger.set()

def wait_for_either(a, b):
    trigger = threading.Event()
    ta = threading.Thread(target=repeat_trigger, args=(a, trigger))
    tb = threading.Thread(target=repeat_trigger, args=(b, trigger))
    ta.start()
    tb.start()
    # Now do the union waiting
    trigger.wait()
Pretty interesting, so I wrote an OOP version of the previous solution:
class EventUnion(object):
    """Register Event objects and wait for release when any of them is set"""
    def __init__(self, ev_list=None):
        self._trigger = Event()
        if ev_list:
            # Make a list of threads, one for each Event
            self._t_list = [
                Thread(target=self._triggerer, args=(ev, ))
                for ev in ev_list
            ]
        else:
            self._t_list = []

    def register(self, ev):
        """Register a new Event"""
        self._t_list.append(Thread(target=self._triggerer, args=(ev, )))

    def wait(self, timeout=None):
        """Start waiting until any one of the registered Events is set"""
        # Start all the threads (a plain loop, so this also works on
        # Python 3, where map() is lazy)
        for t in self._t_list:
            t.start()
        # Now do the union waiting
        return self._trigger.wait(timeout)

    def _triggerer(self, ev):
        ev.wait()
        self._trigger.set()
This is an old question, but I hope this helps someone coming from Google.
The accepted answer is fairly old and will cause an infinite loop for twice-"orified" events.
Here is an implementation using concurrent.futures:
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor

def wait_for_either(events, timeout=None, t_pool=None):
    '''Blocks until one of the events gets set

    PARAMETERS
      events (list): list of threading.Event objects
      timeout (float): timeout for events (used for polling)
      t_pool (concurrent.futures.ThreadPoolExecutor): optional
    '''
    if any(event.is_set() for event in events):
        # Sanity check: at least one event is already set, nothing to wait for
        pass
    else:
        t_pool = t_pool or ThreadPoolExecutor(max_workers=len(events))
        tasks = []
        for event in events:
            tasks.append(t_pool.submit(event.wait))
        concurrent.futures.wait(tasks, timeout=timeout, return_when='FIRST_COMPLETED')
        # Cleanup
        for task in tasks:
            try:
                task.result(timeout=0)
            except concurrent.futures.TimeoutError:
                pass
Testing the function
import threading
import time
from datetime import datetime, timedelta

def bomb(myevent, sleep_s):
    '''Set the event after sleep_s seconds'''
    with lock:
        print('explodes in ', datetime.now() + timedelta(seconds=sleep_s))
    time.sleep(sleep_s)
    myevent.set()
    with lock:
        print('BOOM!')

lock = threading.RLock()  # So prints don't get jumbled
a = threading.Event()
b = threading.Event()
t_pool = ThreadPoolExecutor(max_workers=2)

threading.Thread(target=bomb, args=(a, 5), daemon=True).start()
threading.Thread(target=bomb, args=(b, 120), daemon=True).start()

with lock:
    print('1 second timeout, no ThreadPool', datetime.now())
wait_for_either([a, b], timeout=1)
with lock:
    print('wait_event_or done', datetime.now())
    print('=' * 15)

with lock:
    print('wait for event a', datetime.now())
wait_for_either([a, b], t_pool=t_pool)
with lock:
    print('wait_event_or done', datetime.now())
Starting extra threads seems a clear solution, though not a very efficient one.
The wait_events function below blocks until any one of the events is set.
from threading import Thread, Event

def wait_events(*events):
    event_share = Event()

    def set_event_share(event):
        event.wait()
        event.clear()
        event_share.set()

    for event in events:
        # Pass the function and its argument; calling set_event_share(event)
        # here would block instead of starting a thread.
        Thread(target=set_event_share, args=(event,)).start()
    event_share.wait()

wait_events(event1, event2, event3)
Extending Claudiu's answer so that you can wait for either:
event 1 OR event 2
event 1 AND event 2
from threading import Thread, Event, _Event

class ConditionalEvent(_Event):
    def __init__(self, events_list, condition):
        _Event.__init__(self)
        self.event_list = events_list
        self.condition = condition
        for e in events_list:
            self._setup(e, self._state_changed)
        self._state_changed()

    def _state_changed(self):
        bools = [e.is_set() for e in self.event_list]
        if self.condition == 'or':
            if any(bools):
                self.set()
            else:
                self.clear()
        elif self.condition == 'and':
            if all(bools):
                self.set()
            else:
                self.clear()

    def _custom_set(self, e):
        e._set()
        e._state_changed()

    def _custom_clear(self, e):
        e._clear()
        e._state_changed()

    def _setup(self, e, changed_callback):
        e._set = e.set
        e._clear = e.clear
        e._state_changed = changed_callback
        e.set = lambda: self._custom_set(e)
        e.clear = lambda: self._custom_clear(e)
Example usage is very similar to before:

import time

e1 = Event()
e2 = Event()

# Example: wait for the triggering of event 1 OR event 2
or_e = ConditionalEvent([e1, e2], 'or')

# Example: wait for the triggering of event 1 AND event 2
and_e = ConditionalEvent([e1, e2], 'and')
Not pretty, but you can use two additional threads to multiplex the events...
def wait_for_either(a, b):
    flag = threading.Event()  # Set by whichever event fires first

    class Event_Waiter(threading.Thread):
        def __init__(self, event):
            threading.Thread.__init__(self)
            self.e = event

        def run(self):
            self.e.wait()
            flag.set()

    a_thread = Event_Waiter(a)
    b_thread = Event_Waiter(b)
    a_thread.start()
    b_thread.start()
    flag.wait()
Note, you may have to worry about accidentally getting both events if they arrive too quickly. The helper threads (a_thread and b_thread) should synchronize (e.g. with a lock) around setting flag, and the winner should then kill the other thread (possibly resetting that thread's event if it was consumed).
import logging
import threading
import time

def wait_for_event_timeout(*events):
    while not all([e.isSet() for e in events]):
        # Check to see if the events are set, waiting up to 1 sec each.
        ev_wait_bool = [e.wait(1) for e in events]
        # Process if all events are set. Change all() to any() to
        # process if any event is set.
        if all(ev_wait_bool):
            logging.debug('processing event')
        else:
            logging.debug('doing other work')

e1 = threading.Event()
e2 = threading.Event()

t3 = threading.Thread(name='non-block-multi',
                      target=wait_for_event_timeout,
                      args=(e1, e2))
t3.start()

logging.debug('Waiting before calling Event.set()')
time.sleep(5)
e1.set()
time.sleep(10)
e2.set()
logging.debug('Event is set')

kill a function after a certain time in windows

I've read a lot of posts about using threads, subprocesses, etc. A lot of it seems overcomplicated for what I'm trying to do...
All I want to do is stop executing a function after X amount of time has elapsed.
import time

def big_loop(bob):
    x = bob
    start = time.time()
    while True:
        print time.time() - start
This function is an endless loop that never throws any errors or exceptions, period.
I'm not sure of the difference between "commands, shells, subprocesses, threads, etc." and this function, which is why I'm having trouble manipulating subprocesses.
I found this code here and tried it, but as you can see it keeps printing after 10 seconds have elapsed:
import time
import threading
import subprocess as sub

class RunCmd(threading.Thread):
    def __init__(self, cmd, timeout):
        threading.Thread.__init__(self)
        self.cmd = cmd
        self.timeout = timeout

    def run(self):
        self.p = sub.Popen(self.cmd)
        self.p.wait()

    def Run(self):
        self.start()
        self.join(self.timeout)
        if self.is_alive():
            self.p.terminate()
            self.join()

def big_loop(bob):
    x = bob
    start = time.time()
    while True:
        print time.time() - start

RunCmd(big_loop('jimijojo'), 10).Run()  # Supposed to quit after 10 seconds, but doesn't
x = raw_input('DONEEEEEEEEEEEE')
What's a simple way this function can be killed? As you can see in my attempt above, it doesn't terminate after the timeout and just keeps on going...
Oh, also: I've read about using signal, but I'm on Windows so I can't use the alarm feature (Python 2.7).
And assume the "infinitely running function" can't be manipulated or changed to be non-infinite. If I could change the function, well, I'd just change it to be non-infinite, wouldn't I?
Here are some similar questions, which I haven't able to port over their code to work with my simple function:
Perhaps you can?
Python: kill or terminate subprocess when timeout
signal.alarm replacement in Windows [Python]
OK, I tried an answer I received and it works. But how can I use it if I remove the if __name__ == "__main__": statement? When I remove this statement, the loop never ends, as it did before:
import multiprocessing
import Queue
import time

def infinite_loop_function(bob):
    var = bob
    start = time.time()
    while True:
        time.sleep(1)
        print time.time() - start
    print 'this statement will never print'

def wrapper(queue, bob):
    result = infinite_loop_function(bob)
    queue.put(result)
    queue.close()

#if __name__ == "__main__":
queue = multiprocessing.Queue(1)  # Maximum size is 1
proc = multiprocessing.Process(target=wrapper, args=(queue, 'var'))
proc.start()

# Wait for TIMEOUT seconds
try:
    timeout = 10
    result = queue.get(True, timeout)
except Queue.Empty:
    # Deal with lack of data somehow
    result = None
finally:
    proc.terminate()

print 'running other code, now that that infinite loop has been defeated!'
print 'bla bla bla'
x = raw_input('done')
Use the building blocks in the multiprocessing module:
import multiprocessing
import Queue

TIMEOUT = 5

def big_loop(bob):
    import time
    time.sleep(4)
    return bob * 2

def wrapper(queue, bob):
    result = big_loop(bob)
    queue.put(result)
    queue.close()

def run_loop_with_timeout():
    bob = 21  # Whatever sensible value you need
    queue = multiprocessing.Queue(1)  # Maximum size is 1
    proc = multiprocessing.Process(target=wrapper, args=(queue, bob))
    proc.start()

    # Wait for TIMEOUT seconds
    try:
        result = queue.get(True, TIMEOUT)
    except Queue.Empty:
        # Deal with lack of data somehow
        result = None
    finally:
        proc.terminate()

    # Process data here, not in the try block above,
    # otherwise your process keeps running
    print result

if __name__ == "__main__":
    run_loop_with_timeout()
You could also accomplish this with a Pipe/Connection pair, but I'm not familiar with their API. Change the sleep time or TIMEOUT to check the behaviour for either case.
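For completeness, here is a rough sketch of the Pipe/Connection variant (my own speculation, not tested; it reuses big_loop and TIMEOUT from above, with Connection.poll() supplying the timeout):

import multiprocessing

def pipe_wrapper(conn, bob):
    conn.send(big_loop(bob))
    conn.close()

def run_loop_with_pipe_timeout():
    # duplex=False: recv_conn can only receive, send_conn can only send
    recv_conn, send_conn = multiprocessing.Pipe(duplex=False)
    proc = multiprocessing.Process(target=pipe_wrapper, args=(send_conn, 21))
    proc.start()
    # poll() waits up to TIMEOUT seconds for the result to arrive
    result = recv_conn.recv() if recv_conn.poll(TIMEOUT) else None
    proc.terminate()
    print result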
There is no straightforward way to kill a function after a certain amount of time without running the function in a separate process. A better approach would probably be to rewrite the function so that it returns after a specified time:
import time

def big_loop(bob, timeout):
    x = bob
    start = time.time()
    end = start + timeout
    while time.time() < end:
        print time.time() - start
        # Do more stuff here as needed
Can't you just return from the loop?
start = time.time()
endt = start + 30
while True:
    now = time.time()
    if now > endt:
        return
    else:
        print now - start
import os, signal, time

cpid = os.fork()
if cpid == 0:
    while True:
        # do stuff
        pass
else:
    time.sleep(10)
    os.kill(cpid, signal.SIGKILL)
You can also have the thread's loop check for an event, which is more portable and flexible, as it allows reactions other than brute killing; see the sketch below. However, this approach fails if the "do stuff" part can take a long time (or even wait forever on some event).
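A minimal sketch of that event-checked approach (the stop_event and worker names are mine; this assumes you are free to add the check to the loop body):

import threading
import time

stop_event = threading.Event()

def big_loop(bob):
    x = bob
    start = time.time()
    while not stop_event.is_set():  # instead of "while True"
        print time.time() - start

worker = threading.Thread(target=big_loop, args=('jimijojo',))
worker.start()
worker.join(10)    # Give the loop 10 seconds...
stop_event.set()   # ...then ask it to stop
worker.join()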
