Trouble with sharing values between different processes - Python

I have different processes, each of which waits for an event to occur (a change in a sensor's state).
I coded something like:
def Sensor_1():
    wait_for_change_in_status
    set counter to X
    activate_LED_process.start()

def Sensor_2():
    same function

def Sensor_3():
    same function

def LED():
    start LEDs
    while counter > 0:
        counter -= 1
        time.sleep(1)
    turn off LEDs
    active_LED_process.join()

...

if __name__ == '__main__':
    Sensor_1_process = multiprocessing.Process(target=Sensor_1)
    Sensor_2_process = same
    Sensor... you get it.
    activate_LED_process = multiprocessing.Process(target=LED)
Now I'm stuck on exchanging the counter value: the different processes need to be able to set the counter to a specific value.
Each sensor should be able to reset the counter.
The LED process should be able to count the counter down and react when it reaches zero.
What would be a proper solution? I read about values, arrays, pipes and queues.
For values and arrays I couldn't find good documentation. Pipes seem to work only between two processes. And queues seem to hold more than one value (I'd compare a queue to a list - is this correct?)
import RPi.GPIO as GPIO
import time
import multiprocessing
import sys

GPIO.setmode(GPIO.BCM)
GPIO.setup(25, GPIO.IN, pull_up_down=GPIO.PUD_UP)

LED_time = 40  # time how long LEDs stay active (not important at this point)

def Sens_GT():
    name = multiprocessing.current_process().name
    print(name, 'Starting')
    while True:
        GPIO.wait_for_edge(25, GPIO.FALLING)
        time.sleep(0.1)
        print("Open")
        LED_count = multiprocessing.Value('i', 40)  # For later: implementation of LED_time as a variable
        print(LED_count)  # For checking if the counter is set properly
        """
        Missing code:
        if "Process is already running":
            go on
        else:
            Count_proc.start()
        """
    print(name, 'Exiting')  # Shouldn't happen because of the "while True:"

"""
Missing code:
def Sens_GAR():
def Sens_HT():
"""

def Count():
    name = multiprocessing.current_process().name
    print(name, 'Starting')
    """
    Missing code:
    Import counter value
    """
    while countdown > 0:
        print(countdown)
        time.sleep(1)
        LED_count -= 1
    print(name, 'Exiting')
    GPIO.cleanup()  # clean up GPIO on normal exit
    Count_proc.join()
    sys.exit(1)

if __name__ == '__main__':
    value_count = multiprocessing.Value('i', 0)
    lock = multiprocessing.Lock()
    Sens_GT_proc = multiprocessing.Process(target=Sens_GT)
    Count_proc = multiprocessing.Process(target=Count)
    Sens_GT_proc.start()
    Sens_GT_proc.join()

Value seems to be a good choice for your use case. However, you don't use it the right way.
After instantiating a value with multiprocessing.Value(), you have to pass the object as an argument to your sub-processes, as shown in the multiprocessing guide, and read or write the shared integer through its .value attribute.
So your code should be something like:
def Sens_GT(counter):
    ...
    counter.value = 40
    ...

def Count(counter):
    ...
    while counter.value > 0:
        counter.value -= 1
        time.sleep(1)
    ...

...

if __name__ == '__main__':
    value_count = multiprocessing.Value('i', 0)
    Sens_GT_proc = multiprocessing.Process(target=Sens_GT, args=(value_count,))
    Count_proc = multiprocessing.Process(target=Count, args=(value_count,))
For me, pipes and queues are similar mechanisms that are very useful in multi-processing contexts.
While you could probably use them in your case, I think they are more suited for data exchange (producers/consumers) than for shared state/values between processes.
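To make that concrete, here is a minimal, runnable sketch of the same idea. The sensor is simulated with a sleep, and the function names, timings and the separate Lock are only illustrative, not taken from your code:

import multiprocessing
import time

def sensor(counter, lock):
    # simulate a sensor event every few seconds and reset the shared counter
    for _ in range(3):
        time.sleep(3)
        with lock:
            counter.value = 5   # reset the countdown to 5 seconds

def countdown(counter, lock):
    # count the shared value down once per second and react when it hits zero
    while True:
        with lock:
            value = counter.value
            if value > 0:
                counter.value -= 1
        print('counter is', value)
        if value == 0:
            print('LEDs would be turned off here')
        time.sleep(1)

if __name__ == '__main__':
    counter = multiprocessing.Value('i', 5)
    lock = multiprocessing.Lock()
    multiprocessing.Process(target=sensor, args=(counter, lock)).start()
    multiprocessing.Process(target=countdown, args=(counter, lock), daemon=True).start()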


How do I change a parameter of a function running in an infinite loop in a thread (python)?
I am new to threading and Python, but this is what I want to do (simplified):
class myThread (threading.Thread):
    def __init__(self, i):
        threading.Thread.__init__(self)
    def run(i):
        self.blink(i)
    def blink(i):
        if i!=0:
            if i==1:
                speed=0.10
            elif i==2:
                speed=0.20
            elif i==3:
                speed=0.30
            while(true):
                print("speed\n")

i=3
blinkThread=myThread(i)
blinkThread.start()
while(i!=0):
    i=input("Enter 0 to Exit or 1/2/3 to continue\n")
    if i!=0:
        blinkThread.run(i)
Now, obviously this code gives errors regarding the run() method. I want to run the function blink() in an infinite loop but change the 'i' variable. I also cannot do it without a thread because I have other portions of code which are doing parallel tasks. What can I do?
Thanks!
The best thing to learn first is to never change variables from different threads. Communicate over queues:
import threading
import queue

def drive(speed_queue):
    speed = 1
    while True:
        try:
            speed = speed_queue.get(timeout=1)
            if speed == 0:
                break
        except queue.Empty:
            pass
        print("speed:", speed)

def main():
    speed_queue = queue.Queue()
    threading.Thread(target=drive, args=(speed_queue,)).start()
    while True:
        speed = int(input("Enter 0 to Exit or 1/2/3 to continue: "))
        speed_queue.put(speed)
        if speed == 0:
            break

main()
Besides a lot of syntax errors, you're approaching the whole process wrong - there is no point in delegating the work from run() to another method, and even if there were, the last while would loop infinitely (if it were actually written as while True:), never checking for a speed change.
Also, don't use the run() method to interface with your thread - it's a special method that gets called when starting the thread; handle your own updates separately.
You should also devote some time to learn OOP in Python as that's not how one makes a class.
Here's an example that does what you want, hope it might help you:
import threading
import time

class MyThread (threading.Thread):
    def __init__(self, speed=0.1):
        self._speed_cache = 0
        self.speed = speed
        self.lock = threading.RLock()
        super(MyThread, self).__init__()

    def set_speed(self, speed):  # you can use a proper setter if you want
        with self.lock:
            self.speed = speed

    def run(self):
        while True:
            with self.lock:
                if self.speed == 0:
                    print("Speed dropped to 0, exiting...")
                    break
                # just so we don't continually print the speed, print only on change
                if self.speed != self._speed_cache:
                    print("Current speed: {}".format(self.speed))
                    self._speed_cache = self.speed
            time.sleep(0.1)  # let it breathe

try:
    input = raw_input  # add for Python 2.6+ compatibility
except NameError:
    pass

current_speed = 3  # initial speed
blink_thread = MyThread(current_speed)
blink_thread.start()

while current_speed != 0:  # main loop until 0 speed is selected
    time.sleep(0.1)  # wait a little for an update
    current_speed = int(input("Enter 0 to Exit or 1/2/3 to continue\n"))  # add validation?
    blink_thread.set_speed(current_speed)
Also, do note that threading does not execute anything in parallel - the GIL switches between contexts, so two threads are never executing Python code at exactly the same time. The mutex (lock) here is there to ensure atomicity of operations, not actual exclusiveness.
If you need something to actually execute in parallel (if you have more than one core, that is), you'll need to use multiprocessing instead.
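For completeness, here is a rough sketch of the same update pattern with multiprocessing instead of threading; the blinker function, the fixed list of speeds and the polling interval are illustrative assumptions, not part of the original question:

import multiprocessing
import queue
import time

def blinker(speed_queue):
    # worker process: blink at the current speed, poll the queue for updates
    speed = 3
    while speed != 0:
        try:
            speed = speed_queue.get_nowait()  # non-blocking check for a new speed
        except queue.Empty:
            pass
        print("blinking at speed", speed)
        time.sleep(0.5)

if __name__ == '__main__':
    speed_queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=blinker, args=(speed_queue,))
    proc.start()
    for new_speed in (2, 1, 0):  # stand-in for interactive input
        time.sleep(2)
        speed_queue.put(new_speed)
    proc.join()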

Threading and target function in external file (python)

I want to move some functions to an external file to keep things clearer.
Let's say I have this example code (which does indeed work):
import threading
from time import sleep

testVal = 0

def testFunc():
    while True:
        global testVal
        sleep(1)
        testVal = testVal + 1
        print(testVal)

t = threading.Thread(target=testFunc, args=())
t.daemon = True
t.start()

try:
    while True:
        sleep(2)
        print('testval = ' + str(testVal))
except KeyboardInterrupt:
    pass
Now I want to move testFunc() to a new Python file. My guess was the following, but the global variables don't seem to be the same.
testserver.py:
import threading
import testclient
from time import sleep

testVal = 0

t = threading.Thread(target=testclient.testFunc, args=())
t.daemon = True
t.start()

try:
    while True:
        sleep(2)
        print('testval = ' + str(testVal))
except KeyboardInterrupt:
    pass
and testclient.py:
from time import sleep
from testserver import testVal as val

def testFunc():
    while True:
        global val
        sleep(1)
        val = val + 1
        print(val)
my output is:
1
testval = 0
2
3
testval = 0 (testval didn't change)
...
while it should:
1
testval = 1
2
3
testval = 3
...
Any suggestions? Thanks!
Your immediate problem is not due to multithreading (we'll get to that) but due to how you use global variables. The thing is, when you use this:
from testserver import testVal as val
You're essentially doing this:
import testserver
val = testserver.testVal
i.e. you're creating a local reference val that points to the testserver.testVal value. This is all fine and dandy when you read it (the first time at least) but when you try to assign its value in your function with:
val = val + 1
You're actually re-assigning the local (to testclient.py) val variable, not setting the value of testserver.testVal. You have to reference the module attribute directly (i.e. testserver.testVal += 1) if you want to change its value.
That being said, the next problem you might encounter stems directly from multithreading - you can hit a race condition where the GIL pauses one thread right after it reads the value but before it writes it back, the next thread reads and overwrites the current value, and then the first thread resumes and writes the same value, resulting in a single increase despite two increments. You need some sort of mutex to make sure that all non-atomic operations execute exclusively in one thread if you want to use your data this way. The easiest way to do that is with a Lock from the threading module:
testserver.py:
# ...
testVal = 0
testValLock = threading.Lock()
# ...
testclient.py:
# ...
with testserver.testValLock:
    testserver.testVal += 1
# ...
A third and final problem you might encounter is a circular dependency (testserver.py requires testclient.py, which requires testserver.py), and I'd advise you to re-think the way you want to approach this problem. If all you want is a common global store, create it separately from the modules that might depend on it. That way you ensure proper loading and initialization order without the danger of unresolvable circular dependencies.
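To illustrate that last suggestion, a minimal sketch could put the shared value and its lock into a third module that both sides import; the module name shared_state.py is made up for the example:

# shared_state.py - owns the shared value and its lock
import threading

testVal = 0
testValLock = threading.Lock()

# testclient.py
import shared_state

def testFunc():
    with shared_state.testValLock:
        shared_state.testVal += 1

# testserver.py
import shared_state
import testclient   # no circular import: both modules depend only on shared_state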

An odd situation when testing multiprocessing and threading with Python

I am using a process pool (with 3 processes). In every process, I create some threads using a thread class to speed up the handling of some work.
At first, everything was OK. But when I wanted to change a variable from a thread, I ran into an odd situation.
For testing, to see what happens, I set a global variable COUNT. Honestly, I don't know whether this is safe or not. I just want to see whether, using multiprocessing and threading, I can change COUNT or not.
#!/usr/bin/env python
# encoding: utf-8

import os
import threading
from Queue import Queue
from multiprocessing import Process, Pool

# global variables
max_threads = 11
Stock_queue = Queue()
COUNT = 0

class WorkManager:
    def __init__(self, work_queue_size=1, thread_pool_size=1):
        self.work_queue = Queue()
        self.thread_pool = []  # initially empty, no threads yet
        self.work_queue_size = work_queue_size
        self.thread_pool_size = thread_pool_size
        self.__init_work_queue()
        self.__init_thread_pool()

    def __init_work_queue(self):
        for i in xrange(self.work_queue_size):
            self.work_queue.put((func_test, Stock_queue.get()))

    def __init_thread_pool(self):
        for i in xrange(self.thread_pool_size):
            self.thread_pool.append(WorkThread(self.work_queue))

    def finish_all_threads(self):
        for i in xrange(self.thread_pool_size):
            if self.thread_pool[i].is_alive():
                self.thread_pool[i].join()

class WorkThread(threading.Thread):
    def __init__(self, work_queue):
        threading.Thread.__init__(self)
        self.work_queue = work_queue
        self.start()

    def run(self):
        while self.work_queue.qsize() > 0:
            try:
                func, args = self.work_queue.get(block=False)
                func(args)
            except Queue.Empty:
                print 'queue is empty....'

def handle(process_name):
    print process_name, 'is running...'
    work_manager = WorkManager(Stock_queue.qsize()/3, max_threads)
    work_manager.finish_all_threads()

def func_test(num):
    # use a global variable to test what happens
    global COUNT
    COUNT += num

def prepare():
    # prepare test queue, store 50 numbers in Stock_queue
    for i in xrange(50):
        Stock_queue.put(i)

def main():
    prepare()
    pools = Pool()
    # set up 3 processes
    for i in xrange(3):
        pools.apply_async(handle, args=('process_'+str(i),))
    pools.close()
    pools.join()
    global COUNT
    print 'COUNT: ', COUNT

if __name__ == '__main__':
    os.system('printf "\033c"')
    main()
Now, the final result of COUNT is just 0. I am unable to understand what's happening here.
You print the COUNT variable in the parent process. Variables don't sync across processes because the processes don't share memory, which means the variable stays 0 in the parent process and is only increased in the subprocesses.
In the case of threading, threads do share memory, which means they share the COUNT variable, so within each subprocess COUNT should end up greater than 0. But those threads live inside the subprocesses, and when they change the variable, the change is not visible in the other processes or in the parent.
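If you want the counts back in the parent, one straightforward option (a sketch, not the only way) is to have each pool worker return its partial count and to sum the results in the parent:

from multiprocessing import Pool

def handle(process_name):
    # each worker computes its own partial count and returns it to the parent
    partial = sum(range(10))   # stand-in for the real per-process work
    return partial

if __name__ == '__main__':
    with Pool(3) as pool:
        results = [pool.apply_async(handle, ('process_' + str(i),)) for i in range(3)]
        total = sum(r.get() for r in results)
    print('COUNT:', total)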

Calling a function and each call works simultaneously and not in a queue

Hi, I would like to know what method I could use to call a function a few times so that each call is processed in parallel and NOT queued one after another.
Something along these lines:
import time
import random

def run(incoming):
    time.sleep(5)
    print incoming

while True:
    hash = random.getrandbits(128)
    run(hash)
    time.sleep(1)
import time
import random
import threading

def run(incoming):
    time.sleep(5)
    print incoming

while True:
    hash = random.getrandbits(128)
    threading.Thread(target=run, args=(hash,)).start()
    time.sleep(1)
Note that this is restricted by the GIL, which interleaves the threads ... but for your purposes you can probably call it parallel. Since your thread count keeps growing, though, it may eventually break down.
There are much better ways to do this, so let's check one out:
import time
import random
import threading
import multiprocessing

def do_hard_work(hash):
    time.sleep(1)

def Run(data_pipe):
    while True:
        while data_pipe.poll():
            hash = data_pipe.recv()
            if hash == "QUIT":
                return  # leave the worker once QUIT arrives
            threading.Thread(target=do_hard_work, args=(hash,)).start()
        time.sleep(1)

local, remote = multiprocessing.Pipe()
worker_process = multiprocessing.Process(target=Run, args=(local,))
worker_process.start()

while True:
    remote.send(random.getrandbits(128))
    time.sleep(1)
    if some_condition:
        remote.send("QUIT")
        break
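If unbounded thread growth is a concern, a bounded pool is an alternative worth considering; a minimal sketch using concurrent.futures (Python 3, not part of the original answer) might look like this:

import concurrent.futures
import random
import time

def run(incoming):
    time.sleep(5)
    print(incoming)

# a fixed-size pool processes calls concurrently without spawning
# a brand-new thread for every single call
with concurrent.futures.ThreadPoolExecutor(max_workers=8) as executor:
    for _ in range(20):  # stand-in for the original "while True" producer
        executor.submit(run, random.getrandbits(128))
        time.sleep(1)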

How do I schedule a one-time script to run x minutes from now? (alternative for 'at')

(Background: I'd like to control a light source with a motion sensor. Light should turn off x minutes after last detected motion. The framework is in place, scheduling is what remains to be done.)
Currently, when motion is detected the light gets turned on and a job to turn it off in 'now + x minutes' is scheduled. Whenever motion is detected during those x minutes, the job gets removed from the queue and a new one is set up, effectively extending the time the light stays on.
I tried the "at" command but job handling is quite clunky. Whenever a job is removed from the queue an email gets sent. I looked at the Python crontab module but it would need much additional programming (handling relative time, removing old cronjobs, etc.) and seems to be slower.
What are my alternatives (bash, python, perl)?
-- Edit: My python skills are at beginner level, here's what I put together:
#!/usr/bin/env python2.7
# based on http://raspi.tv/2013/how-to-use-interrupts-with-python-on-the-raspberry-pi-and-rpi-gpio-part-2
# more than 160 seconds without activity are required to re-trigger the action

import time
from subprocess import call
import os
import RPi.GPIO as GPIO

PIR = 9   # data pin of PIR sensor (in)
LED = 7   # positive pin of LED (out)
timestamp = '/home/pi/events/motiontime'    # file to store last motion detection time (in epoch)
SOUND = '/home/pi/events/sounds/Hello.wav'  # reaction sound

# GPIO setup
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(LED, GPIO.OUT)

# function which gets called when motion is reported (sensor includes its own
# delay-until-hot-again and sensitivity settings)
def my_callback(channel):
    now = time.time()           # store current epoch time in variable 'now'
    f = open(timestamp, "r")
    then = float(f.readline())  # read last detection time from file
    difference = now - then     # calculate time that has passed
    call(['/home/pi/bin/kitchenlights.sh', '-1'])  # turn light on
    call(['/home/pi/bin/lighttimer.sh'])           # schedule at job to turn lights off
    if difference > 160:        # if more than 160 seconds without activity have passed then...
        GPIO.output(LED, True)  # turn on LED
        if not os.path.isfile("/home/pi/events/muted"):  # check if system is muted, else
            call(['/usr/bin/mplayer', '-really-quiet', '-noconsolecontrols', SOUND])  # play sound
        GPIO.output(LED, False)  # turn off LED
        f = open(timestamp, "w")
        f.write(repr(now))       # update timestamp
        f.close()
    else:                        # when less than 160 seconds have passed do nothing and
        f = open(timestamp, "w")
        f.write(repr(now))       # update timestamp (thus increasing the interval of silence)
        f.close()

GPIO.add_event_detect(PIR, GPIO.RISING, callback=my_callback, bouncetime=100)  # add rising edge detection on a channel

while True:
    time.sleep(0.2)
Now that questions are coming in: I think I could put a countdown in the while loop, right? How would that work?
I would approach this with the threading module. To do this, you'd set up the following thread class:
class CounterThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.count = 0
        self.start()

    def run(self):
        while self.count < COUNTLIMIT:
            time.sleep(0.1)
            self.count += 0.1
        # Call function to turn off light here
        return

    def newSig(self):
        self.count = 0
This is a thread whose counter restarts every time it receives a new signal (i.e. the thread's newSig function is called). If COUNTLIMIT is reached (how long you want to wait, in seconds), you call the function to turn off the light.
Here's how you'd incorporate this into your code:
import threading
from subprocess import call
import os
import time
import RPi.GPIO as GPIO

PIR = 9   # data pin of PIR sensor (in)
LED = 7   # positive pin of LED (out)
SOUND = '/home/pi/events/sounds/Hello.wav'  # reaction sound
COUNTLIMIT = 160
countThread = None
WATCHTIME = 600  # run for 10 minutes

# GPIO setup
GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(LED, GPIO.OUT)

#------------------------------------------------------------
class CounterThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.count = 0
        self.start()

    def run(self):
        call(['/home/pi/bin/kitchenlights.sh', '-1'])  # turn light on
        while self.count < COUNTLIMIT:
            time.sleep(0.1)
            self.count += 0.1
        call(['/home/pi/bin/kitchenlights.sh', '-0'])  # turn light off
        threadKiller()
        return

    def newSig(self):
        self.count = 0

#------------------------------------------------------------
def my_callback(channel):
    '''function which gets called when motion is reported (sensor includes its own
    delay-until-hot-again and sensitivity settings)'''
    global countThread
    try:
        countThread.newSig()
    except:
        countThread = CounterThread()

#------------------------------------------------------------
def threadKiller():
    global countThread
    countThread = None

#------------------------------------------------------------
def main():
    GPIO.add_event_detect(PIR, GPIO.RISING, callback=my_callback, bouncetime=100)  # add rising edge detection on a channel
    t = 0
    while t < WATCHTIME:
        t += 0.1
        time.sleep(0.1)

#------------------------------------------------------------
if __name__ == "__main__":
    main()
I don't have any way to test this, so please let me know if anything breaks. Since you said you're new to Python, I made a few formatting changes to make your code a bit prettier. These things are generally considered good form, but are optional. However, you need to be careful about indentation, because as it appears in your question your code should not run (it will throw an IndentationError).
Hope this helps
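As a side note, the standard library's threading.Timer can express the "turn off x minutes after the last motion" pattern quite directly: cancel the pending timer on each new motion event and start a fresh one. A rough sketch (the kitchenlights.sh calls are taken from the question; the rest is illustrative):

import threading
from subprocess import call

OFF_DELAY = 160  # seconds of inactivity before the light turns off
off_timer = None

def turn_off():
    call(['/home/pi/bin/kitchenlights.sh', '-0'])

def motion_detected(channel):
    # called from the GPIO callback: (re)start the countdown on every motion event
    global off_timer
    call(['/home/pi/bin/kitchenlights.sh', '-1'])
    if off_timer is not None:
        off_timer.cancel()  # discard the previously scheduled turn-off
    off_timer = threading.Timer(OFF_DELAY, turn_off)
    off_timer.daemon = True
    off_timer.start()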
