I want to move some functions to an external file to make things clearer.
Let's say I have this example code (which does indeed work):
import threading
from time import sleep

testVal = 0

def testFunc():
    global testVal
    while True:
        sleep(1)
        testVal = testVal + 1
        print(testVal)

t = threading.Thread(target=testFunc, args=())
t.daemon = True
t.start()

try:
    while True:
        sleep(2)
        print('testval = ' + str(testVal))
except KeyboardInterrupt:
    pass
Now I want to move testFunc() to a new Python file. My guess was the following, but the global variables don't seem to be the same.
testserver.py:
import threading
import testclient
from time import sleep

testVal = 0

t = threading.Thread(target=testclient.testFunc, args=())
t.daemon = True
t.start()

try:
    while True:
        sleep(2)
        print('testval = ' + str(testVal))
except KeyboardInterrupt:
    pass
and testclient.py:
from time import sleep
from testserver import testVal as val

def testFunc():
    global val
    while True:
        sleep(1)
        val = val + 1
        print(val)
My output is:
1
testval = 0
2
3
testval = 0 (testval didn't change)
...
while it should be:
1
testval = 1
2
3
testval = 3
...
Any suggestions? Thanks!
Your immediate problem is not due to multithreading (we'll get to that) but due to how you use global variables. The thing is, when you use this:
from testserver import testVal as val
You're essentially doing this:
import testserver
val = testserver.testVal
i.e. you're creating a local reference val that points to the testserver.testVal value. This is all fine and dandy when you read it (the first time, at least), but when you try to assign to it in your function with:
val = val + 1
You're actually re-assigning the local (to testclient.py) val variable, not setting the value of testserver.testVal. You have to reference the attribute on the module object directly (i.e. testserver.testVal += 1) if you want to change its value.
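For example, a minimal sketch of testclient.py that mutates the module attribute directly (setting aside the circular-import issue discussed below):

import testserver
from time import sleep

def testFunc():
    while True:
        sleep(1)
        # Rebind the attribute on the module object, not a local copy
        testserver.testVal += 1
        print(testserver.testVal)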
That being said, the next problem you might encounter stems directly from multithreading: a race condition. The GIL can pause one thread right after it reads the value but before it writes the result back; a second thread then reads and overwrites the current value; when the first thread resumes, it writes the same value, resulting in a single increase despite two increments. You need some sort of mutex to make sure non-atomic operations execute exclusively in one thread if you want to use your data this way. The easiest way to do it is with a Lock that comes with the threading module:
testserver.py:
# ...
testVal = 0
testValLock = threading.Lock()
# ...
testclient.py:
# ...
with testserver.testValLock:
    testserver.testVal += 1
# ...
A third and final problem you might encounter is a circular dependency (testserver.py requires testclient.py, which requires testserver.py), and I'd advise you to rethink the way you want to approach this problem. If all you want is a common global store, create it separately from the modules that depend on it. That way you ensure a proper loading and initialization order without the danger of unresolvable circular dependencies.
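For instance, a minimal sketch of that approach, using a hypothetical shared.py module that both sides import:

shared.py:

import threading

testVal = 0
testValLock = threading.Lock()

testclient.py:

import shared
from time import sleep

def testFunc():
    while True:
        sleep(1)
        with shared.testValLock:
            shared.testVal += 1

testserver.py then imports shared as well and reads shared.testVal under the same lock, so neither module needs to import the other.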
Related
I have different processes which wait for an event to occur (a change in the state of a sensor).
I coded something like:
def Sensor_1():
    wait_for_change_in_status
    set counter to X
    activate_LED_process.start()

def Sensor_2():
    same function

def Sensor_3():
    same function

def LED():
    start LEDs
    while counter > 0:
        counter -= 1
        time.sleep(1)
    turn off LEDs
    activate_LED_process.join()

...

if __name__ == '__main__':
    Sensor_1_process = multiprocessing.Process(target=Sensor_1)
    Sensor_2_process = same
    Sensor... you get it.
    activate_LED_process = multiprocessing.Process(target=LED)
Now I'm stuck on exchanging the counter value: different processes need to be able to set the counter to a specific value.
Each sensor should be able to reset the value of the counter.
The LED process should be able to count the counter down and react when it reaches zero.
What would be a proper solution? I read about values, arrays, pipes and queues.
For values and arrays I couldn't find good documentation. Pipes seem to work only between two processes. And queues seem to hold more than one value (I'd compare a queue to a list; is this correct?)
import RPi.GPIO as GPIO
import time
import multiprocessing
import sys

GPIO.setmode(GPIO.BCM)
GPIO.setup(25, GPIO.IN, pull_up_down=GPIO.PUD_UP)

LED_time = 40  # time how long LEDs stay active (not important at this point)

def Sens_GT():
    name = multiprocessing.current_process().name
    print(name, 'Starting')
    while True:
        GPIO.wait_for_edge(25, GPIO.FALLING)
        time.sleep(0.1)
        print("Open")
        LED_count = multiprocessing.Value('i', 40)  # For later: implementation of LED_time as a variable
        print(LED_count)  # For checking if the counter is set properly
        """
        Missing code:
        if "Process is already running":
            go on
        else:
            Count_proc.start()
        """
    print(name, 'Exiting')  # Shouldn't happen because of the "while True:"

"""
Missing code:
def Sens_GAR():
def Sens_HT():
"""

def Count():
    name = multiprocessing.current_process().name
    print(name, 'Starting')
    """
    Missing code:
    Import counter value
    """
    while countdown > 0:
        print(countdown)
        time.sleep(1)
        LED_count -= 1
    print(name, 'Exiting')
    GPIO.cleanup()  # clean up GPIO on normal exit
    Count_proc.join()
    sys.exit(1)

if __name__ == '__main__':
    value_count = multiprocessing.Value('i', 0)
    lock = Lock()
    Sens_GT_proc = multiprocessing.Process(target=Sens_GT)
    Count_proc = multiprocessing.Process(target=Count)
    Sens_GT_proc.start()
    Sens_GT_proc.join()
Value seems to be a good choice for your use case. However, you're not using it the right way.
After instantiating a value with multiprocessing.Value(), you have to pass the object as an argument to your sub-processes, as shown in the multiprocessing guide.
So your code should be something like:

def Sens_GT(counter):
    ...
    counter.value = 40
    ...

def Count(counter):
    ...
    while counter.value > 0:
        counter.value -= 1
        time.sleep(1)
    ...

...

if __name__ == '__main__':
    value_count = multiprocessing.Value('i', 0)
    Sens_GT_proc = multiprocessing.Process(target=Sens_GT, args=(value_count,))
    Count_proc = multiprocessing.Process(target=Count, args=(value_count,))
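To make this concrete, here is a minimal, self-contained sketch (hypothetical names, no GPIO) of a Value shared between a setter process and a countdown process, synchronized via the Value's built-in lock:

import multiprocessing
import time

def setter(counter):
    # Reset the shared counter, as a sensor handler would
    with counter.get_lock():
        counter.value = 5

def countdown(counter):
    # Decrement the shared counter until it reaches zero
    while True:
        with counter.get_lock():
            if counter.value <= 0:
                break
            counter.value -= 1
            current = counter.value
        print('counter =', current)
        time.sleep(0.1)

if __name__ == '__main__':
    counter = multiprocessing.Value('i', 0)
    setter_proc = multiprocessing.Process(target=setter, args=(counter,))
    count_proc = multiprocessing.Process(target=countdown, args=(counter,))
    setter_proc.start()
    setter_proc.join()
    count_proc.start()
    count_proc.join()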
For me, pipes and queues are similar mechanisms that are very useful in multiprocessing contexts. While you could probably use them in your case, I think they are better suited for data exchange (producers/consumers) than for sharing state or a value between processes.
The story begins with two threads and a global variable that changes... a lot :)
Thread number one (for simplicity we will call it t1) generates a random number and stores it in a global variable GLB.
Thread number two (aka t2) checks the value of the global variable, and when it reaches a certain value it starts printing it for a period of time.
BUT if t1 changes the value of that global variable, the value inside the loop changes too, and I don't want that!
I'll try to write pseudocode:
import random
import time
import threading

GLB = [0, 0]

# this is a thread
def t1():
    while True:
        GLB[0] = random.randint(0, 100)
        GLB[1] = 1
        print(GLB)
        time.sleep(5)

# this is a thread
def t2():
    while True:
        if GLB[0] <= 30:
            static = GLB
            for i in range(50):
                print(i, " ", static)
                time.sleep(1)

a = threading.Thread(target=t1)
a.start()
b = threading.Thread(target=t2)
b.start()

while True:
    time.sleep(1)
The question is: why does the variable static change inside the for loop? It should remain constant until the loop finishes!
Could I put a lock on the variable? Or is there another way to solve the problem?
Thanks and regards.
GLB is a mutable object. To let one thread see a consistent value while another thread modifies it, you can either protect the object temporarily with a lock (the modifier will wait) or copy the object. In your example, a copy seems the best option. In Python, a slice copy is atomic, so it does not need any other locking.
import random
import time
import threading

GLB = [0, 0]

# this is a thread
def t1():
    while True:
        GLB[0] = random.randint(0, 100)
        GLB[1] = 1
        print(GLB)
        time.sleep(5)

# this is a thread
def t2():
    while True:
        static = GLB[:]
        if static[0] <= 30:
            for i in range(50):
                print(i, " ", static)
                time.sleep(1)

a = threading.Thread(target=t1)
a.start()
b = threading.Thread(target=t2)
b.start()

while True:
    time.sleep(1)
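For completeness, here is a sketch of the lock-based alternative mentioned above; glb_lock is a hypothetical lock that every reader and writer of GLB would have to share:

import threading

GLB = [0, 0]
glb_lock = threading.Lock()

def writer():
    # Modify GLB only while holding the lock
    with glb_lock:
        GLB[0] = 42

def snapshot():
    # Take a consistent copy under the same lock
    with glb_lock:
        return GLB[:]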
In Java, tryLock(long time, TimeUnit unit) can be used as a timed attempt to acquire the lock without blocking indefinitely. How can the equivalent be achieved in Python? (A Pythonic/idiomatic way is preferred!)
Java tryLock:

ReentrantLock lock1 = new ReentrantLock();
if (lock1.tryLock(13, TimeUnit.SECONDS)) { ... }

Python Lock:

import threading

lock = threading.Lock()
lock.acquire()  # how to do lock.acquire(timeout=13)?
The "try lock" behaviour can be obtained using threading module's Lock.acquire(False) (see the Python doc):
import threading
import time
my_lock = threading.Lock()
successfully_acquired = my_lock.acquire(False)
if successfully_acquired:
try:
print "Successfully locked, do something"
time.sleep(1)
finally:
my_lock.release()
else:
print "already locked, exit"
I can't figure out a satisfactory way to use with here.
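One option, sketched below, is to wrap the non-blocking acquire in a small context manager of your own (try_lock here is a made-up helper, not part of the threading module):

import threading
from contextlib import contextmanager

@contextmanager
def try_lock(lock):
    # Yield True if the lock was acquired, False otherwise;
    # release it on exit only if we actually hold it.
    acquired = lock.acquire(False)
    try:
        yield acquired
    finally:
        if acquired:
            lock.release()

my_lock = threading.Lock()
with try_lock(my_lock) as acquired:
    if acquired:
        print("Successfully locked, do something")
    else:
        print("already locked, exit")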
Ouch, my bad!
I should have read the Python reference for locks to begin with!
Lock.acquire([blocking])
When invoked with the blocking argument set to False, do not block.
If a call with blocking set to True would block, return False
immediately; otherwise, set the lock to locked and return True.
So I can just do something like this (or something more advanced even :P ):
import threading
import time

def my_trylock(lock, timeout):
    count = 0
    success = False
    while count < timeout and not success:
        success = lock.acquire(False)
        if success:
            break
        count = count + 1
        time.sleep(1)  # should be a better way to do this
    return success

lock = threading.Lock()
my_trylock(lock, 13)
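Worth noting: since Python 3.2, Lock.acquire accepts a timeout directly, so no polling loop is needed there:

import threading

lock = threading.Lock()
if lock.acquire(timeout=13):  # blocks for at most 13 seconds
    try:
        print("Successfully locked, do something")
    finally:
        lock.release()
else:
    print("could not acquire the lock within 13 seconds")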
The question may be really stupid, but I've been working on this code since this morning and now even stupid things are hard :\
I've got this code, and I call it by creating 8 processes and running them.
Then there's another thread that has to print info about these 8 processes (code is below).
import MSCHAPV2
import threading
import binascii
import multiprocessing
import sys
import time

class CrackerThread(multiprocessing.Process):
    password_header = "s."
    current_pin = ""
    username = ""
    server_challenge = ""
    peer_challenge = ""
    nt_response = ""
    starting_pin = 0
    limit = 0
    testing_pin = 0
    event = None

    def __init__(self, username, server_challenge, peer_challenge, nt_response, starting_pin, limit, event):
        #threading.Thread.__init__(self)
        super(CrackerThread, self).__init__()
        self.username = username
        self.server_challenge = server_challenge
        self.peer_challenge = peer_challenge
        self.nt_response = nt_response
        self.starting_pin = starting_pin
        self.limit = limit
        self.event = event
        self.testing_pin = starting_pin
        #self.setDaemon(True)

    def run(self):
        mschap = MSCHAPV2.MSCHAPV2()
        pin_range = self.starting_pin + self.limit
        while self.testing_pin <= pin_range and not self.event.is_set():
            self.current_pin = "%s%08d" % (self.password_header, self.testing_pin)
            if mschap.CheckPassword(self.server_challenge, self.peer_challenge, self.username, self.current_pin.encode("utf-16-le"), self.nt_response):
                self.event.set()
                print("Found valid password!")
                print("user =", self.username)
                print("password =", self.current_pin)
            self.testing_pin += 1
        print("Thread for range (%d, %d) ended with no success." % (self.starting_pin, pin_range))

    def getCurrentPin(self):
        return self.testing_pin

def printCrackingState(threads):
    info_string = '''
    ++++++++++++++++++++++++++++++++++
    + Starting password = s.%08d +
    +--------------------------------+
    + Current pin = s.%08d +
    ++++++++++++++++++++++++++++++++++
    + Missing pins = %08d +
    ++++++++++++++++++++++++++++++++++
    '''
    while 1:
        for t in threads:
            printed_string = info_string % (t.starting_pin, t.getCurrentPin(), t.getMissingPinsCount())
            sys.stdout.write(printed_string)
        sys.stdout.write("--------------------------------------------------------------------")
        time.sleep(30)
printCrackingState is called by these lines in my "main":
infoThread = threading.Thread(target = utils.printCrackingState, args=([processes]))
#infoThread = cursesTest.CursesPrinter(threads, processes, event)
infoThread.setDaemon(True)
infoThread.start()
Now the question is: why do t.starting_pin and t.getCurrentPin() print the SAME value?
It's like t.getCurrentPin() returns the value set in the __init__() method and is not aware that I'm incrementing it!
Suggestions?
Your problem here is that you're trying to update a variable in one process and read it in another process. You can't do that. The whole point of multiprocessing, as opposed to multithreading, is that variables are not shared by default.
Read the docs, especially Exchanging objects between processes and Sharing state between processes, and they will explain the various ways around this. But really, there are two: either you need some kind of channel/API to let the parent process ask the child process for its current state, or you need some kind of shared memory to store the data in. And you may need a lock to protect either the channel or the shared memory.
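As a sketch of the channel approach (hypothetical names throughout), the parent can poll the child over a multiprocessing.Pipe:

import multiprocessing

def worker(conn):
    # Stand-in for a cracking loop: count up and report state on request
    current = 0
    for _ in range(10000000):
        current += 1
        if conn.poll():         # the parent asked for our state
            conn.recv()         # consume the request
            conn.send(current)  # reply with the current value
    conn.send(current)          # final state, so the parent never blocks forever
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=worker, args=(child_conn,))
    p.start()
    parent_conn.send('state?')  # ask the child where it is
    print(parent_conn.recv())   # read whatever the child reports
    p.join()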
While shared memory may seem like the "obvious" answer here, you may want to time the following:

val = 0
for i in range(10000):
    val += 1

against the shared-memory version:

from multiprocessing import Value, Lock

val = Value('i', 0)
lock = Lock()
for i in range(10000):
    with lock:
        val.value += 1
It's worth noting that your code would also be incorrect with threads, although it would probably work in CPython. If you don't do any synchronization, there is no guaranteed ordering. If you write a value in one thread and read it "later" in another thread, you can still read the older value. How much later? Well, if thread 0 runs on core 0 and thread 1 on core 1, and they both have the variable in their cache, and nobody tells the CPUs to flush the cache, thread 1 will go on reading the old value forever. In practice, CPython's Global Interpreter Lock eventually synchronizes everything implicitly (so we're talking milliseconds rather than infinity), and all variables have explicit memory locations rather than being, say, optimized into registers, and so on, so you can usually get away with writing unprotected races. But, thanks to Murphy's Law, you should read "usually" as "every time until the first demo to the investors" or "until we attach the live nuclear reactor".
I'd like my Python program to run an algorithm for a given number of seconds, then print the best result so far and exit.
What is the best way to do so?
I tried the following, but it did not work (the program kept running after the printing):
def printBestResult(self):
    print(self.bestResult)
    sys.exit()

def findBestResult(self, time):
    self.t = threading.Timer(time, self.printBestResult)
    self.t.start()
    while(1):
        # find best result
Untested code, but something like this?

import time

threshold = 60
start = time.time()
best_run = threshold

while time.time() - start < threshold:
    run_start = time.time()
    doSomething()
    run_time = time.time() - run_start
    if run_time < best_run:
        best_run = run_time
On Unix, you can use signals: this code times out after 1 second and counts how many times it iterates through the while loop in that time:

import signal
import sys

def handle_alarm(args):
    print(args.best_val)
    sys.exit()

class Foo(object):
    pass

self = Foo()  # some mutable object to mess with in the loop
self.best_val = 0

signal.signal(signal.SIGALRM, lambda *args: handle_alarm(self))
signal.alarm(1)  # timeout after 1 second

while True:
    self.best_val += 1  # do something to mutate "self" here.
Or, you could easily have your alarm_handler raise an exception which you then catch outside the while loop, printing your best result.
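A sketch of that exception-based variant (raising the built-in TimeoutError is my choice here, not something the signal module requires):

import signal

def raise_timeout(signum, frame):
    raise TimeoutError  # interrupts the loop below

signal.signal(signal.SIGALRM, raise_timeout)
signal.alarm(1)  # deliver SIGALRM after 1 second

best_val = 0
try:
    while True:
        best_val += 1  # stand-in for the real search step
except TimeoutError:
    print(best_val)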
If you want to do this with threads, a good way is to use an Event. Note that signal.alarm won't work on Windows, so threading is probably your best bet in that case.
import threading
import time
import random

class StochasticSearch(object):
    def __init__(self):
        self.halt_event = threading.Event()

    def find_best_result(self, duration):
        halt_thread = threading.Timer(duration, self.halt_event.set)
        halt_thread.start()
        best_result = 0
        while not self.halt_event.is_set():
            result = self.search()
            best_result = result if result > best_result else best_result
            time.sleep(0.5)
        return best_result

    def search(self):
        val = random.randrange(0, 10000)
        print('searching for something; found {}'.format(val))
        return val

print(StochasticSearch().find_best_result(3))
You need an exit condition, or the program will run forever (or until it runs out of memory). Add one yourself.