Usage of Semaphores in Python

I am trying to use a Semaphore in Python to block the execution of a thread, and release the Semaphore in another thread depending on some conditions (in my case, receiving some content over TCP).
But it is not working: my main thread is not blocked like I thought it would be. My minimal example:
import time
from threading import Thread, Semaphore

class ServerManager():
    sem = Semaphore()

    def run(self):
        time.sleep(15)
        self.sem.release()

manager = ServerManager()
thread = Thread(target = manager.run)
thread.start()
manager.sem.acquire()
print("I am here immediately but should not")
The message is printed immediately in the console but should only be printed after 15 seconds in my example. I'm sure I did something horribly wrong, but what?

You need to read the documentation about threading.Semaphore. A semaphore blocks when a thread tries to acquire and the counter is 0. When you create a Semaphore, the default count is 1, so your acquire is able to succeed immediately. I suspect you want
sem = Semaphore(0)
so the resource is immediately unavailable.
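A quick way to see the difference is a non-blocking acquire, which just reports whether the acquire would have succeeded (a minimal sketch using only the standard library):

from threading import Semaphore

sem_default = Semaphore()     # internal counter starts at 1
print(sem_default.acquire(blocking=False))   # True: succeeds immediately

sem_zero = Semaphore(0)       # internal counter starts at 0
print(sem_zero.acquire(blocking=False))      # False: a blocking acquire would wait for release()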

The answer from Tim Roberts is right. I read the Python doc, but I did not understand it: I thought (wrongly) that the default value had the behavior I wanted. The full working code is:
import time
from threading import Thread, Semaphore

class ServerManager():
    sem = Semaphore(0)

    def run(self):
        time.sleep(15)
        self.sem.release()

manager = ServerManager()
thread = Thread(target = manager.run)
thread.start()
manager.sem.acquire()
print("I am here immediately but should not")
The message is printed after 15 seconds.

Related

threading.get_ident() returns same ID between different threads when running pytest

I have created a small test case using pytest to demonstrate the issue:
from threading import Thread, get_ident

def test_threading():
    threads = []
    ids = []

    def append_id():
        # time.sleep(1)
        ids.append(get_ident())

    for _ in range(5):
        thread = Thread(target=append_id)
        threads.append(thread)
        thread.start()

    for thread in threads:
        thread.join()

    assert len(set(ids)) == 5
The test is failing because get_ident returns the same ID for different threads. But when I add time.sleep(1) in each thread, the test passes.
I'm not sure I understand why.
I'm using Python 3.9.0 and Pytest 7.1.2.
From the documentation of get_ident:
Return the ‘thread identifier’ of the current thread. This is a nonzero integer. Its value has no direct meaning; it is intended as a magic cookie to be used e.g. to index a dictionary of thread-specific data. Thread identifiers may be recycled when a thread exits and another thread is created.
Since your threads run so quickly (without the time.sleep(1)), each one can finish before the next is created, so the ids are being recycled.
You can provide a name to each thread. This name does not have to be unique, but you can use it in your test (or in your application, if you need something that is unique in a context):
from threading import Thread, get_ident, current_thread

def test_threading():
    threads = []
    names = []
    ids = []

    def append_id():
        # time.sleep(1)
        ids.append(get_ident())
        names.append(current_thread().name)

    for i in range(5):
        thread = Thread(target=append_id, name=f'Thread {i}')
        threads.append(thread)
        thread.start()

    for thread in threads:
        thread.join()

    assert len(set(names)) == 5
    print(f'Names: {names}')
    print(f'Ids: {set(ids)}')
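To see the recycling explanation in action, here is a minimal sketch (not from the original answer) that keeps all five threads alive at the same time with a Barrier; since no thread can exit before the others have started, no identifier can be recycled and the original assertion passes:

from threading import Thread, Barrier, get_ident

def test_threading_with_barrier():
    barrier = Barrier(5)          # all five threads must reach the barrier together
    ids = []

    def append_id():
        barrier.wait()            # no thread exits before the others have started
        ids.append(get_ident())

    threads = [Thread(target=append_id) for _ in range(5)]
    for thread in threads:
        thread.start()
    for thread in threads:
        thread.join()

    assert len(set(ids)) == 5     # idents are unique while the threads coexist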

Why isn't my thread starting back up when the Condition object is notified/released?

This question pertains to Python 3.6.
I have a piece of code which has a thread from the threading library running, as well as the program's main thread. Both threads access an object that is not threadsafe, so I have them locking on a Condition object before using the object. The thread I spawned only needs to access/update that object once every 5 minutes, so there's a 5 minute sleep timer in the loop.
Currently, the main thread never gets a hold of the lock. When the second thread releases the Condition and starts waiting on the sleep() call, the main thread never wakes up/acquires the lock. It's as if the main thread has died.
class Loader:
    def __init__(self, q):
        ...
        self.queryLock = threading.Condition()
        ...
        thread = Thread(target=self.threadfunc, daemon=True)
        thread.start()
        ...
        self.run()

    def threadfunc(self):
        ...
        while True:
            self.queryLock.acquire()
            [critical section #1]
            self.queryLock.notify()
            self.queryLock.release()
            sleep(300)

    def run(self):
        ...
        while True:
            ...
            self.queryLock.acquire()
            [critical section #2]
            self.queryLock.notify()
            self.queryLock.release()
            ...
I believe you don't really need to use a Condition. It appears that a simple Lock would get the job done in your case. You don't actually verify that some condition is met and you don't use the Condition's special method wait().
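For reference, here is a minimal sketch (not the original code) of what that Lock-based variant could look like; the with statement takes care of acquire()/release(), and the short sleep in run() anticipates the timing issue discussed next:

import threading
from time import sleep

queryLock = threading.Lock()

def threadfunc():
    while True:
        with queryLock:       # acquire()/release() handled by the context manager
            pass              # critical section #1
        sleep(300)

def run():
    while True:
        with queryLock:
            pass              # critical section #2
        sleep(0.1)            # give the other thread a chance to grab the lock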
That being said, regarding the code you provided, it seems that your main thread is too "quick", and re-acquires the lock before the other thread gets a chance.
Here is a slightly modified version of your code in which the main thread waits a bit, giving the other thread a chance to successfully acquire the lock and continue.
class Loader:
    def __init__(self):
        self.queryLock = threading.Condition()
        thread = Thread(target=self.threadfunc, daemon=True)
        thread.start()
        self.run()

    def threadfunc(self):
        while True:
            self.queryLock.acquire()
            print("critical section 1")
            time.sleep(1)
            self.queryLock.notify()
            self.queryLock.release()
            time.sleep(5)

    def run(self):
        while True:
            self.queryLock.acquire()
            print("critical section 2")
            time.sleep(2)
            self.queryLock.notify()
            self.queryLock.release()
            print("main is waiting a bit")
            time.sleep(1)

Loader()
Race conditions are fun :)

Why does my multiprocess queue not appear to be thread safe?

I am building a watchdog timer that runs another Python program, and if it fails to find a check-in from any of the threads, shuts down the whole program. This is so it will, eventually, be able to take control of needed communication ports. The code for the timer is as follows:
from multiprocessing import Process, Queue
from time import sleep
from copy import deepcopy

PATH_TO_FILE = r'.\test_program.py'
WATCHDOG_TIMEOUT = 2

class Watchdog:
    def __init__(self, filepath, timeout):
        self.filepath = filepath
        self.timeout = timeout
        self.threadIdQ = Queue()
        self.knownThreads = {}

    def start(self):
        threadIdQ = self.threadIdQ
        process = Process(target = self._executeFile)
        process.start()

        try:
            while True:
                unaccountedThreads = deepcopy(self.knownThreads)

                # Empty queue since last wake. Add new thread IDs to knownThreads, and account for all known thread IDs
                # in queue
                while not threadIdQ.empty():
                    threadId = threadIdQ.get()
                    if threadId in self.knownThreads:
                        unaccountedThreads.pop(threadId, None)
                    else:
                        print('New threadId < {} > discovered'.format(threadId))
                        self.knownThreads[threadId] = False

                # If there is a known thread that is unaccounted for, then it has either hung or crashed.
                # Shut everything down.
                if len(unaccountedThreads) > 0:
                    print('The following threads are unaccounted for:\n')
                    for threadId in unaccountedThreads:
                        print(threadId)
                    print('\nShutting down!!!')
                    break
                else:
                    print('No unaccounted threads...')

                sleep(self.timeout)

        # Account for any exceptions thrown in the watchdog timer itself
        except:
            process.terminate()
            raise

        process.terminate()

    def _executeFile(self):
        with open(self.filepath, 'r') as f:
            exec(f.read(), {'wdQueue' : self.threadIdQ})

if __name__ == '__main__':
    wd = Watchdog(PATH_TO_FILE, WATCHDOG_TIMEOUT)
    wd.start()
I also have a small program to test the watchdog functionality
from time import sleep
from threading import Thread
from queue import SimpleQueue

Q_TO_Q_DELAY = 0.013

class QToQ:
    def __init__(self, processQueue, threadQueue):
        self.processQueue = processQueue
        self.threadQueue = threadQueue
        Thread(name='queueToQueue', target=self._run).start()

    def _run(self):
        pQ = self.processQueue
        tQ = self.threadQueue
        while True:
            while not tQ.empty():
                sleep(Q_TO_Q_DELAY)
                pQ.put(tQ.get())

def fastThread(q):
    while True:
        print('Fast thread, checking in!')
        q.put('fastID')
        sleep(0.5)

def slowThread(q):
    while True:
        print('Slow thread, checking in...')
        q.put('slowID')
        sleep(1.5)

def hangThread(q):
    print('Hanging thread, checked in')
    q.put('hangID')
    while True:
        pass

print('Hello! I am a program that spawns threads!\n\n')

threadQ = SimpleQueue()

Thread(name='fastThread', target=fastThread, args=(threadQ,)).start()
Thread(name='slowThread', target=slowThread, args=(threadQ,)).start()
Thread(name='hangThread', target=hangThread, args=(threadQ,)).start()

QToQ(wdQueue, threadQ)
As you can see, I need to have the threads put into a queue.Queue, while a separate object slowly feeds the output of the queue.Queue into the multiprocessing queue. If instead I have the threads put directly into the multiprocessing queue, or do not have the QToQ object sleep in between puts, the multiprocessing queue will lock up, and will appear to always be empty on the watchdog side.
Now, as the multiprocessing queue is supposed to be thread and process safe, I can only assume I have messed something up in the implementation. My solution seems to work, but also feels hacky enough that I feel I should fix it.
I am using Python 3.7.2, if it matters.
I suspect that test_program.py exits.
I changed the last few lines to this:
tq = threadQ
# tq = wdQueue # option to send messages direct to WD
t1 = Thread(name='fastThread', target=fastThread, args=(tq,))
t2 = Thread(name='slowThread', target=slowThread, args=(tq,))
t3 = Thread(name='hangThread', target=hangThread, args=(tq,))
t1.start()
t2.start()
t3.start()
QToQ(wdQueue, threadQ)
print('Joining with threads...')
t1.join()
t2.join()
t3.join()
print('test_program exit')
The calls to join() mean that the test program never exits all by itself, since none of the threads ever exit.
So, as is, t3 hangs, the watchdog program detects the unaccounted-for thread, and it stops the test program.
If t3 is removed from the above program, then the other two threads are well behaved and the watchdog program allows the test program to continue indefinitely.

Threading and Conditions

I'm new to threading and I don't really understand how to use conditions. At the moment, I have a thread class like this:
class MusicThread(threading.Thread):
    def __init__(self, song):
        threading.Thread.__init__(self)
        self.song = song

    def run(self):
        self.output = audiere.open_device()
        self.music = self.output.open_file(self.song, 1)
        self.music.play()
        # i want the thread to wait indefinitely at this point until
        # a condition/flag in the main thread is met/activated
In the main thread, the relevant code is:
music = MusicThread(thesong)
music.start()
What this should mean is that I can get a song playing through the secondary thread until I issue a command in the main thread to stop it. I'm guessing I'd have to use locks and wait() or something?
There is a much simpler solution here. You're using the Audiere library, which already plays audio in its own thread. Therefore, there is no need to spawn a second thread of your own just to play audio. Instead, use Audiere directly from the main thread, and stop it from the main thread.
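Roughly, that advice looks like the sketch below; it reuses the calls from the question (open_device, open_file, play), and it assumes the returned stream object also exposes a stop() method:

import audiere  # third-party library used in the question

device = audiere.open_device()
music = device.open_file(thesong, 1)   # same calls as in the question's run()
music.play()                           # Audiere keeps playing in its own thread

# ... later, still in the main thread, when the song should stop:
music.stop()                           # assumption: the stream exposes stop()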
Matt Campbell's answer is probably right. But maybe you want to use a thread for other reasons. If so, you may find a Queue.Queue very useful:
>>> import threading
>>> import Queue
>>> def queue_getter(input_queue):
...     command = input_queue.get()
...     while command != 'quit':
...         print command
...         command = input_queue.get()
...
>>> input_queue = Queue.Queue()
>>> command_thread = threading.Thread(target=queue_getter, args=(input_queue,))
>>> command_thread.start()
>>> input_queue.put('play')
>>> play
input_queue.put('pause')
pause
>>> input_queue.put('quit')
>>> command_thread.join()
command_thread does a blocking read on the queue, waiting for a command to be put on the queue. It continues to read and print commands off the queue as they are received until the 'quit' command is issued.
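Translating that pattern back to the original MusicThread (a sketch in Python 3 syntax; the Audiere calls are the ones from the question, and stop() is again an assumption about the stream object):

import queue
import threading
import audiere  # third-party library used in the question

class MusicThread(threading.Thread):
    def __init__(self, song):
        super().__init__()
        self.song = song
        self.commands = queue.Queue()

    def run(self):
        output = audiere.open_device()
        music = output.open_file(self.song, 1)
        music.play()
        # Block here until the main thread sends 'quit'.
        while self.commands.get() != 'quit':
            pass
        music.stop()  # assumption: the stream exposes stop()

music = MusicThread(thesong)   # thesong as in the question
music.start()
# ... later, from the main thread:
music.commands.put('quit')
music.join()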

How can I invoke a thread multiple times in Python?

I'm sorry if this is a stupid question. I am trying to use a number of thread classes to finish different jobs, which involves invoking these threads at different times, many times over. But I am not sure which method to use. The code looks like this:
class workers1(Thread):
    def __init__(self):
        Thread.__init__(self)
    def run(self):
        do some stuff

class workers2(Thread):
    def __init__(self):
        Thread.__init__(self)
    def run(self):
        do some stuff

class workers3(Thread):
    def __init__(self):
        Thread.__init__(self)
    def run(self):
        do some stuff

WorkerList1=[workers1(i) for i in range(X)]
WorkerList2=[workers2(i) for i in range(XX)]
WorkerList3=[workers3(i) for i in range(XXX)]

while True:
    for thread in WorkerList1:
        thread.run (start? join? or?)
    for thread in WorkerList2:
        thread.run (start? join? or?)
    for thread in WorkerList3:
        thread.run (start? join? or?)
    do sth .
I am trying to have all the threads in all the worker lists start functioning at the same time, or at least start around the same time. After some time, once they have all terminated, I would like to invoke all the threads again.
If there were no loop, I could just use .start(); but since I can only start a thread once, start apparently does not fit here. If I use run, it seems that all the threads run sequentially, not only the threads in the same list, but also threads from different lists.
Can anyone please help?
There are a lot of misconceptions here:
You can only start a specific instance of a thread once. But in your case, the for loop is looping over different instances of a thread, each instance being assigned to the variable thread in the loop, so there is no problem at all in calling the start() method on each thread. (You can think of the variable thread as an alias of the Thread() object instantiated in your list.)
run() is not the same as join(): calling run() performs as if you were programming sequentially. The run() method does not start a new thread; it simply executes the statements in the method, as for any other function call.
join() does not start executing anything: it only waits for a thread to finish. In order for join() to work properly for a thread, you have to call start() on that thread first.
Additionally, you should note that you cannot restart a thread once it has finished execution: you have to recreate the thread object for it to be started again. One workaround to get this working is to call Thread.__init__() at the end of the run() method. However, I would not recommend doing this, since it will prevent you from using the join() method to detect the end of execution of the thread.
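A minimal sketch of the difference between run() and start() (the thread name here is just for illustration):

from threading import Thread, current_thread

def task():
    print('running in:', current_thread().name)

t = Thread(target=task, name='worker')
t.run()      # no new thread is created: prints "running in: MainThread"

t = Thread(target=task, name='worker')   # a fresh object, since a thread can only be started once
t.start()    # runs in a new thread: prints "running in: worker"
t.join()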
If you called thread.start() in the loops, you would actually start every thread only once, because all the entries in your lists are distinct thread objects (it does not matter that they belong to the same class). You should never call the run() method of a thread directly; it is meant to be called by the start() method. Calling it directly would not run it in a separate thread.
The code below creates a class that is just a Thread, but whose start() and run() call the Thread class's initializer again afterwards, so the object doesn't know it has already been started and can be started again.
from threading import Thread

class MTThread(Thread):
    def __init__(self, name = "", target = None):
        self.mt_name = name
        self.mt_target = target
        Thread.__init__(self, name = name, target = target)

    def start(self):
        super().start()
        Thread.__init__(self, name = self.mt_name, target = self.mt_target)

    def run(self):
        super().run()
        Thread.__init__(self, name = self.mt_name, target = self.mt_target)
def code():
    # Some code
    pass

thread = MTThread(name = "SomeThread", target = code)
thread.start()
thread.start()
I had this same dilemma and came up with this solution which has worked perfectly for me. It also allows a thread-killing decorator to be used efficiently.
The key feature is the use of a thread refresher which is instantiated and .started in main. This thread-refreshing thread will run a function that instantiates and starts all other (real, task-performing) threads. Decorating the thread-refreshing function with a thread-killer allows you to kill all threads when a certain condition is met, such as main terminating.
#ThreadKiller(arg)  # what is this
def RefreshThreads():
    threadTask1 = threading.Thread(name = "Task1", target = Task1, args = (anyArguments))
    threadTask2 = threading.Thread(name = "Task2", target = Task2, args = (anyArguments))
    threadTask1.start()
    threadTask2.start()

# Main
while True:
    # do stuff
    threadRefreshThreads = threading.Thread(name = "RefreshThreads", target = RefreshThreads, args = ())
    threadRefreshThreads.start()
from threading import Thread
from time import sleep

def runA():
    # Loop for as long as the flag is set; exit when the main thread clears it.
    while a==1:
        print('A\n')
        sleep(0.5)

if __name__ == "__main__":
    a=1
    t1 = Thread(target = runA)
    t1.setDaemon(True)
    t1.start()
    sleep(2)

    a=0                        # clear the flag so runA's loop exits
    print(" now def runA stops")
    sleep(3)

    print("and now def runA continue")
    a=1                        # set the flag again and start a brand-new Thread object
    t1 = Thread(target = runA)
    t1.start()
    sleep(2)
