How to change argument value in a running thread in Python

How do I change a parameter of a function running in an infinite loop in a thread (Python)?
I am new to threading and Python, but this is what I want to do (simplified):
class myThread (threading.Thread):
    def __init__(self, i):
        threading.Thread.__init__(self)
    def run(i):
        self.blink(i)
    def blink(i):
        if i!=0:
            if i==1:
                speed=0.10
            elif i==2:
                speed=0.20
            elif i==3:
                speed=0.30
            while(true):
                print("speed\n")

i=3
blinkThread=myThread(i)
blinkThread.start()
while(i!=0):
    i=input("Enter 0 to Exit or 1/2/3 to continue\n")
    if i!=0:
        blinkThread.run(i)
Now, obviously this code gives errors regarding the run() method. I want to run the function blink() in an infinite loop but change the 'i' variable. I also cannot do it without a thread, because other parts of my code are doing parallel tasks. What can I do?
Thanks!

The best thing to learn first is to never change variables from different threads. Communicate over queues instead:
import threading
import queue

def drive(speed_queue):
    speed = 1
    while True:
        try:
            speed = speed_queue.get(timeout=1)
            if speed == 0:
                break
        except queue.Empty:
            pass
        print("speed:", speed)

def main():
    speed_queue = queue.Queue()
    threading.Thread(target=drive, args=(speed_queue,)).start()
    while True:
        speed = int(input("Enter 0 to Exit or 1/2/3 to continue: "))
        speed_queue.put(speed)
        if speed == 0:
            break

main()

Besides a lot of syntax errors, you're approaching the whole process wrong: there is no point in delegating the work from run() to another method, and even if there were, the last while would loop forever (if it were actually written as while True:) without ever checking for a speed change.
Also, don't use the run() method to interface with your thread: it's a special method that gets called when the thread starts. Handle your own updates separately.
You should also devote some time to learning OOP in Python, as that's not how one writes a class.
Here's an example that does what you want, hope it might help you:
import threading
import time

class MyThread(threading.Thread):

    def __init__(self, speed=0.1):
        self._speed_cache = 0
        self.speed = speed
        self.lock = threading.RLock()
        super(MyThread, self).__init__()

    def set_speed(self, speed):  # you can use a proper setter if you want
        with self.lock:
            self.speed = speed

    def run(self):
        while True:
            with self.lock:
                if self.speed == 0:
                    print("Speed dropped to 0, exiting...")
                    break
                # just so we don't continually print the speed, print only on change
                if self.speed != self._speed_cache:
                    print("Current speed: {}".format(self.speed))
                    self._speed_cache = self.speed
            time.sleep(0.1)  # let it breathe

try:
    input = raw_input  # Python 2 compatibility
except NameError:
    pass

current_speed = 3  # initial speed
blink_thread = MyThread(current_speed)
blink_thread.start()

while current_speed != 0:  # main loop until 0 speed is selected
    time.sleep(0.1)  # wait a little for an update
    current_speed = int(input("Enter 0 to Exit or 1/2/3 to continue\n"))  # add validation?
    blink_thread.set_speed(current_speed)
Also, do note that threading does not execute anything in parallel: the GIL switches between contexts, but two threads never execute Python code at exactly the same time. The mutex (lock) here is there to make compound operations atomic, not to provide actual parallel exclusiveness.
If you need something to actually execute in parallel (if you have more than one core, that is), you'll need to use multiprocessing instead.
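If you do go that route, the queue pattern from above carries over to multiprocessing almost unchanged. A minimal sketch, with the interactive input loop replaced by a fixed list of speeds purely for illustration:
import multiprocessing
import queue  # multiprocessing.Queue.get() raises queue.Empty on timeout

def drive(speed_queue):
    speed = 1
    while speed != 0:
        try:
            speed = speed_queue.get(timeout=1)
        except queue.Empty:
            pass
        print("speed:", speed)

if __name__ == "__main__":
    speed_queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=drive, args=(speed_queue,))
    worker.start()
    for s in (1, 2, 3, 0):  # stand-in for user input
        speed_queue.put(s)
    worker.join()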

Related

Python sleep without blocking other processes

I am running a Python script every hour and I've been using time.sleep(3600) inside a while loop. It seems to work as needed, but I am worried about it blocking new tasks. From my research it seems that it only blocks the current thread, but I want to be 100% sure. While the hourly job shouldn't take more than 15 min, if it does, or if it hangs, I don't want it to block the next one that starts. This is how I've done it:
import threading
import time

def long_hourly_job():
    # do some long task
    pass

if __name__ == "__main__":
    while True:
        thr = threading.Thread(target=long_hourly_job)
        thr.start()
        time.sleep(3600)
Is this sufficient?
Also, the reason I am using time.sleep for this hourly job rather than a cron job is that I want to do everything in code to keep dockerization cleaner.
The code will work (i.e. sleep only blocks the calling thread), but you should be careful of some issues. Some of them have already been mentioned in the comments, like the possibility of time overlaps between threads.

The main issue is that your code slowly leaks resources. After creating a thread, the OS keeps some data structures around even after the thread has finished running. This is necessary, for example, to keep the thread's exit status until the thread's creator asks for it. The function that clears these structures (conceptually equivalent to closing a file) is called join. A thread that has finished running and has not been joined is termed a 'zombie thread'. The amount of memory required by these structures is very small, and your program would have to run for centuries to exhaust any reasonable amount of RAM. Nevertheless, it is good practice to join all the threads you create. A simple approach (if you know that 3600 s is more than enough time for the thread to finish) would be:
if __name__ == "__main__":
    while True:
        thr = threading.Thread(target=long_hourly_job)
        thr.start()
        thr.join(3600)  # wait at most 3600 s for the thread to finish
        if thr.is_alive():  # join does not return useful information
            print("Ooops: the last job did not finish on time")
A better approach if you think that it is possible that sometimes 3600 s is not enough time for the thread to finish:
if __name__ == "__main__":
    previous = []
    while True:
        thr = threading.Thread(target=long_hourly_job)
        thr.start()
        previous.append(thr)
        time.sleep(3600)
        for i in reversed(range(len(previous))):
            t = previous[i]
            t.join(0)
            if t.is_alive():
                print("Ooops: thread still running")
            else:
                print("Thread finished")
                previous.remove(t)
I know that the print statement makes no sense: use logging instead.
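For reference, a minimal sketch of swapping those prints for the standard logging module (the logger name is illustrative):
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("hourly_job")

# inside the monitoring loop, instead of print():
# log.warning("thread still running")
# log.info("thread finished")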
Perhaps a little late. I tested the code from other answers, but my main process got stuck (perhaps I'm doing something wrong?). I then tried a different approach. It's based on the threading.Timer class, but tries to emulate the QtCore.QTimer() behavior and features:
import threading
import time


class Timer:

    SNOOZE = 0
    ONEOFF = 1

    def __init__(self, timerType=SNOOZE):
        self._timerType = timerType
        self._keep = threading.Event()
        self._timerSnooze = None
        self._timerOneoff = None

    class _SnoozeTimer(threading.Timer):
        # This uses the threading.Timer class, but consumes more CPU?!?!?!
        def __init__(self, event, sec, callback, *args):
            threading.Thread.__init__(self)
            self.stopped = event
            self.sec = sec  # interval in seconds (converted from msec by Timer.start)
            self.callback = callback
            self.args = args

        def run(self):
            while not self.stopped.wait(self.sec):
                self.callback(*self.args)

    def start(self, msec: int, callback, *args, start_now=False) -> bool:
        started = False
        if msec > 0:
            if self._timerType == self.SNOOZE:
                if self._timerSnooze is None:
                    self._timerSnooze = self._SnoozeTimer(self._keep, msec / 1000, callback, *args)
                    self._timerSnooze.start()
                    if start_now:
                        callback(*args)
                    started = True
            else:
                if self._timerOneoff is None:
                    self._timerOneoff = threading.Timer(msec / 1000, callback, args)
                    self._timerOneoff.start()
                    started = True
        return started

    def stop(self):
        if self._timerType == self.SNOOZE:
            self._keep.set()
            self._timerSnooze.join()
        else:
            self._timerOneoff.cancel()
            self._timerOneoff.join()

    def is_alive(self):
        if self._timerType == self.SNOOZE:
            isAlive = self._timerSnooze is not None and self._timerSnooze.is_alive() and not self._keep.is_set()
        else:
            isAlive = self._timerOneoff is not None and self._timerOneoff.is_alive()
        return isAlive

    isAlive = is_alive


KEEP = True

def callback():
    global KEEP
    KEEP = False
    print("ENDED", time.strftime("%M:%S"))

if __name__ == "__main__":
    count = 0
    t = Timer(timerType=Timer.ONEOFF)
    t.start(5000, callback)
    print("START", time.strftime("%M:%S"))
    while KEEP:
        if count % 10000000 == 0:
            print("STILL RUNNING")
        count += 1
Notice that the timer runs in a separate thread and invokes the callback function when the time is up, while the main while loop keeps working (in your case, the callback would be used to check whether the long-running job has finished).
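As a rough sketch of how this might be wired up for the question's hourly job (it reuses the Timer class and imports above and long_hourly_job from the question; the wiring itself is illustrative, not tested):
job_thread = None

def check_job():
    # called by the snooze timer every hour: restart the job only if the previous one finished
    global job_thread
    if job_thread is not None and job_thread.is_alive():
        print("previous job still running", time.strftime("%M:%S"))
        return
    job_thread = threading.Thread(target=long_hourly_job)
    job_thread.start()

watchdog = Timer(timerType=Timer.SNOOZE)
watchdog.start(3600 * 1000, check_job, start_now=True)  # interval is given in milliseconds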

Troubles with sharing values between different processes

I have different processes which wait for an event to occur (a change in the state of a sensor).
I coded something like:
def Sensor_1():
    wait_for_change_in_status
    set counter to X
    activate_LED_process.start()

def Sensor_2():
    same function

def Sensor_3():
    same function

def LED():
    start LEDs
    while counter > 0:
        counter -= 1
        time.sleep(1)
    turn off LEDs
    active_LED_process.join()

...

if __name__ == '__main__':
    Sensor_1_process = multiprocessing.Process(target=Sensor_1)
    Sensor_2_process = same
    Sensor... you get it.
    activate_LED_process = multiprocessing.Process(target=LED)
Now I'm stuck on exchanging the counter value, i.e. having different processes be able to change the counter to a specific value.
Each sensor should be able to reset the value of the counter.
The LED process should be able to "countdown the counter" and react if the counter reached zero.
What would be a proper solution? I read about values, arrays, pipes and queues.
For values and arrays I couldn't find good documentation. Pipes seem to work only between two processes. And queues seem to hold more than one value (I'd compare a queue to a list - is this correct?).
import RPi.GPIO as GPIO
import time
import multiprocessing
import sys

GPIO.setmode(GPIO.BCM)
GPIO.setup(25, GPIO.IN, pull_up_down=GPIO.PUD_UP)

LED_time = 40  # time how long LEDs stay active (not important at this point)

def Sens_GT():
    name = multiprocessing.current_process().name
    print(name, 'Starting')
    while True:
        GPIO.wait_for_edge(25, GPIO.FALLING)
        time.sleep(0.1)
        print("Open")
        LED_count = multiprocessing.value('i', 40)  # For later: implementation of LED_time as a variable
        print(LED_count)  # For checking if the counter is set properly
        """
        Missing code:
        if "Process is already running":
            go on
        else:
            Count_proc.start()
        """
    print(name, 'Exiting')  # Shouldn't happen because of the "while True:"

"""
Missing code:
def Sens_GAR():
def Sens_HT():
"""

def Count():
    name = multiprocessing.current_process().name
    print(name, 'Starting')
    """
    Missing code:
    Import counter value
    """
    while countdown > 0:
        print(countdown)
        time.sleep(1)
        LED_count -= 1
    print(name, 'Exiting')
    GPIO.cleanup()  # clean up GPIO on normal exit
    Count_proc.join()
    sys.exit(1)

if __name__ == '__main__':
    value_count = mutliprocessing.value('i', 0)
    lock = Lock()
    Sens_GT_proc = multiprocessing.Process(target=Sens_GT)
    Count_proc = multiprocessing.Process(target=Count)
    Sens_GT_proc.start()
    Sens_GT_proc.join()
multiprocessing.Value seems to be a good choice for your use case.
However, you don't use it the right way.
After instantiating a value with multiprocessing.Value(), you have to pass the object as an argument to your sub-processes, as shown in the multiprocessing guide.
So your code should be something like:
def Sens_GT(counter):
    ...
    counter.value = 40  # assign through .value so the shared memory is updated
    ...

def Count(counter):
    ...
    while counter.value > 0:
        counter.value -= 1
        time.sleep(1)
    ...

...

if __name__ == '__main__':
    value_count = multiprocessing.Value('i', 0)
    Sens_GT_proc = multiprocessing.Process(target=Sens_GT, args=(value_count,))
    Count_proc = multiprocessing.Process(target=Count, args=(value_count,))
For me, pipes and queues are similar mechanisms that are very useful in multiprocessing contexts.
While you could probably use them in your case, I think they are better suited for data exchange (producers, consumers) than for sharing a state/value between processes.
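For contrast with the shared Value above, a minimal producer/consumer sketch with multiprocessing.Queue (the readings and function names are illustrative):
import multiprocessing

def producer(q):
    for reading in (3, 7, 2):  # stand-in for sensor events
        q.put(reading)
    q.put(None)  # sentinel: no more data

def consumer(q):
    while True:
        item = q.get()  # blocks until the producer sends something
        if item is None:
            break
        print("got", item)

if __name__ == '__main__':
    q = multiprocessing.Queue()
    prod = multiprocessing.Process(target=producer, args=(q,))
    cons = multiprocessing.Process(target=consumer, args=(q,))
    prod.start()
    cons.start()
    prod.join()
    cons.join()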

How do I detect if a thread died, and then restart it?

I have an application that fires up a series of threads. Occasionally, one of these threads dies (usually due to a network problem). How can I properly detect a thread crash and restart just that thread? Here is example code:
import random
import threading
import time

class MyThread(threading.Thread):
    def __init__(self, pass_value):
        super(MyThread, self).__init__()
        self.running = False
        self.value = pass_value

    def run(self):
        self.running = True
        while self.running:
            time.sleep(0.25)
            rand = random.randint(0, 10)
            print threading.current_thread().name, rand, self.value
            if rand == 4:
                raise ValueError('Returned 4!')

if __name__ == '__main__':
    group1 = []
    group2 = []
    for g in range(4):
        group1.append(MyThread(g))
        group2.append(MyThread(g+20))

    for m in group1:
        m.start()

    print "Now start second wave..."

    for p in group2:
        p.start()
In this example, I start 4 threads, then I start 4 more threads. Each thread randomly generates an int between 0 and 10. If that int is 4, it raises an exception. Notice that I don't join the threads: I want both the group1 and group2 lists of threads to be running. I found that if I joined the threads, it would wait until the thread terminated. My thread is supposed to be a daemon process and thus should rarely (if ever) hit the ValueError exception this example code is showing; it should be running constantly. By joining, the next set of threads doesn't begin.
How can I detect that a specific thread died and restart just that one thread?
I have attempted the following loop right after my for p in group2 loop.
while True:
    # Create a copy of our groups to iterate over,
    # so that we can delete dead threads if needed
    for m in group1[:]:
        if not m.isAlive():
            group1.remove(m)
            group1.append(MyThread(1))
    for m in group2[:]:
        if not m.isAlive():
            group2.remove(m)
            group2.append(MyThread(500))
    time.sleep(5.0)
I took this method from this question.
The problem with this is that isAlive() seems to always return True, so the threads never restart.
Edit
Would it be more appropriate in this situation to use multiprocessing? I found this tutorial. Is it more appropriate to have separate processes if I am going to need to restart the process? It seems that restarting a thread is difficult.
It was mentioned in the comments that I should check is_active() against the thread. I don't see this mentioned in the documentation, but I do see the isAlive that I am currently using. As I mentioned above, though, this returns True, thus I'm never able to see that a thread has died.
I had a similar issue and stumbled across this question. I found that join takes a timeout argument, and that is_alive will return False once the thread is joined. So my audit for each thread is:
def check_thread_alive(thr):
    thr.join(timeout=0.0)
    return thr.is_alive()
This detects thread death for me.
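For example, the monitoring loop from the question could use it like this (a sketch; MyThread, group1 and the sleep come from the question, and group2 would be handled the same way). Note that the replacement thread also needs to be started, which the loop in the question never did:
while True:
    for m in group1[:]:
        if not check_thread_alive(m):  # the thread has died
            group1.remove(m)
            replacement = MyThread(m.value)
            replacement.start()  # don't forget to start the new thread
            group1.append(replacement)
    time.sleep(5.0)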
You could potentially put a try/except around where you expect it to crash (if it can happen anywhere, you can put it around the whole run function) and keep an indicator variable holding its status.
So something like the following:
class MyThread(threading.Thread):
    def __init__(self, pass_value):
        super(MyThread, self).__init__()
        self.running = False
        self.value = pass_value
        self.RUNNING = 0
        self.FINISHED_OK = 1
        self.STOPPED = 2
        self.CRASHED = 3
        self.status = self.STOPPED

    def run(self):
        self.running = True
        self.status = self.RUNNING
        while self.running:
            time.sleep(0.25)
            rand = random.randint(0, 10)
            print threading.current_thread().name, rand, self.value
            try:
                if rand == 4:
                    raise ValueError('Returned 4!')
            except:
                self.status = self.CRASHED
                self.running = False  # stop this thread so its replacement can take over
Then you can use your loop:
while True:
    # Create a copy of our groups to iterate over,
    # so that we can delete dead threads if needed
    for m in group1[:]:
        if m.status == m.CRASHED:
            value = m.value
            group1.remove(m)
            group1.append(MyThread(value))
    for m in group2[:]:
        if m.status == m.CRASHED:
            value = m.value
            group2.remove(m)
            group2.append(MyThread(value))
    time.sleep(5.0)

Python threading and handing over values

I'm trying to update threads, which run continuously, with new values every now and then.
class Test:
    def __init__(self, num):
        # testing reasons
        self.num = num

    def printloop(self, num):
        self.num = num
        # running is set to True sometime in the beginning
        while running:
            print(self.num)
            time.sleep(3)
        if not running:
            print("finished")

    def setnum(self, num):
        self.num = num
I create threads like this:
t1 = threading.Thread(target=test.printloop,args=("1"))
This works and prints the proper arg.
But how can I update single threads with new values - if needed? Not all of the threads might need to be updated. The setnum method in my class there is obviously not working since it would update the value for all of the threads.
Do I need to limit the thread lifetime, join and wait for them to finish, and then recreate them with new values?
Or should I define a variable for each thread - how do I do that dynamically?
Or is there a better way I'm not seeing?
Thanks!
Edit:
I suppose I'll end up with something like:
test1 = Test(1)
..
test5 = Test(5)
t1 = threading.Thread(target=test1.printloop,args=("1"))
t5 = threading.Thread(target=test5.printloop,args=("5"))
and then use a method on each to set the Values?
For single integer values you can make Test a subclass of Thread (and have run call printloop). Then other threads can call setnum safely. Due to the GIL, and the fact that you are setting a single value, this is safe; if you were doing a more complex update, you would have to wrap setnum and the inner loop in printloop in a lock to prevent race conditions (see the sketch after the example below).
EDIT: A simple example
from threading import Thread
from time import sleep

class Output(Thread):
    def __init__(self, num):
        super(Output, self).__init__()
        self.num = num
        self.running = False

    def run(self):
        self.running = True
        while self.running:
            print(self.num)
            sleep(1)

    def stop(self):
        self.running = False

    def set_num(self, num):
        self.num = num

output = Output(0)
output.start()
sleep(3)
output.set_num(1)
sleep(3)
output.stop()
output.join()
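To illustrate the lock mentioned above, here is a minimal sketch of a variant where the update touches more than one field, so setter and loop both take the lock (the second field is made up for illustration):
from threading import Thread, Lock
from time import sleep

class LockedOutput(Thread):
    def __init__(self, num, unit="x"):
        super(LockedOutput, self).__init__()
        self.num = num
        self.unit = unit
        self.running = False
        self.lock = Lock()

    def run(self):
        self.running = True
        while self.running:
            with self.lock:  # read both fields as one consistent pair
                print("{} {}".format(self.num, self.unit))
            sleep(1)

    def set_state(self, num, unit):
        with self.lock:  # write both fields together, atomically
            self.num = num
            self.unit = unit

    def stop(self):
        self.running = False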

How to end program running after given time in Python

I'd like my Python program to run an algorithm for a given number of seconds, then print the best result so far and exit.
What is the best way to do so?
I tried the following, but it did not work (the program kept running after the printing):
def printBestResult(self):
    print(self.bestResult)
    sys.exit()

def findBestResult(self, time):
    self.t = threading.Timer(time, self.printBestResult)
    self.t.start()
    while(1):
        # find best result
Untested code, but something like this?
import time

threshold = 60
start = time.time()
best_run = threshold

while time.time() - start < threshold:
    run_start = time.time()
    doSomething()
    run_time = time.time() - run_start
    if run_time < best_run:
        best_run = run_time
On Unix, you can use signals. This code times out after 1 second and counts how many times it iterates through the while loop in that time:
import signal
import sys

def handle_alarm(args):
    print(args.best_val)
    sys.exit()

class Foo(object):
    pass

self = Foo()  # some mutable object to mess with in the loop
self.best_val = 0

signal.signal(signal.SIGALRM, lambda *args: handle_alarm(self))
signal.alarm(1)  # timeout after 1 second

while True:
    self.best_val += 1  # do something to mutate "self" here.
Or, you could easily have your alarm_handler raise an exception which you then catch outside the while loop, printing your best result.
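A minimal sketch of that variant (still Unix-only; the TimeUp exception class is made up for illustration):
import signal

class TimeUp(Exception):
    pass

def handle_alarm(signum, frame):
    raise TimeUp()

best_val = 0
signal.signal(signal.SIGALRM, handle_alarm)
signal.alarm(1)  # time out after 1 second
try:
    while True:
        best_val += 1  # do the real search work here
except TimeUp:
    print(best_val)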
If you want to do this with threads, a good way is to use an Event. Note that signal.alarm won't work on Windows, so in that case I think threading is your best bet.
import threading
import time
import random

class StochasticSearch(object):
    def __init__(self):
        self.halt_event = threading.Event()

    def find_best_result(self, duration):
        halt_thread = threading.Timer(duration, self.halt_event.set)
        halt_thread.start()
        best_result = 0
        while not self.halt_event.is_set():
            result = self.search()
            best_result = result if result > best_result else best_result
            time.sleep(0.5)
        return best_result

    def search(self):
        val = random.randrange(0, 10000)
        print('searching for something; found {}'.format(val))
        return val

print(StochasticSearch().find_best_result(3))
You need an exit condition, or the program will run forever (or until it runs out of memory). Add one yourself.