read a variable while multithreading - python

I get two streams of data from an API, so there are 3 threads: the main one, stream1 and stream2. stream1 and stream2 need to process this data, and once they're done they store the results in main_value1 and main_value2.
From the main thread I need to read the latest value at any given time (so if a new value is still being processed, I get the last processed/stored one). What would be the optimal way? From the code example here, I need help coding the functions get_main_value1() and, of course, get_main_value2():
import threading
import time
import random

def stream1():
    while True:
        main_value1 = process()

def stream2():
    while True:
        main_value2 = process2()

def get_main_value1(): ?

def get_main_value2(): ?

def main():
    threading.Thread(target=stream1).start()
    threading.Thread(target=stream2).start()
    while True:
        time.sleep(random.randint(0, 10))
        A = get_main_value1()
        B = get_main_value2()

One way would be to make them global:
STREAM1_LAST_VALUE = None

def stream1():
    global STREAM1_LAST_VALUE
    while True:
        main_value1 = process()
        STREAM1_LAST_VALUE = main_value1

STREAM2_LAST_VALUE = None

def stream2():
    global STREAM2_LAST_VALUE
    while True:
        main_value2 = process2()
        STREAM2_LAST_VALUE = main_value2

def get_main_value1():
    return STREAM1_LAST_VALUE

def get_main_value2():
    return STREAM2_LAST_VALUE

def main():
    threading.Thread(target=stream1).start()
    threading.Thread(target=stream2).start()
    while True:
        time.sleep(random.randint(0, 10))
        A = get_main_value1()
        B = get_main_value2()
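In CPython, rebinding a global name is atomic under the GIL, so the getters above can never see a half-written value, only the previous one. If an update ever involves more than one field, though, a threading.Lock keeps reads and writes consistent. A minimal sketch of that variant (process() and process2() remain the question's stand-ins):

import threading

_lock = threading.Lock()
STREAM1_LAST_VALUE = None
STREAM2_LAST_VALUE = None

def stream1():
    global STREAM1_LAST_VALUE
    while True:
        value = process()  # stand-in from the question
        with _lock:  # guard the write
            STREAM1_LAST_VALUE = value

def stream2():
    global STREAM2_LAST_VALUE
    while True:
        value = process2()
        with _lock:
            STREAM2_LAST_VALUE = value

def get_main_value1():
    with _lock:  # guard the read
        return STREAM1_LAST_VALUE

def get_main_value2():
    with _lock:
        return STREAM2_LAST_VALUE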


Python asynchronous call that updates a variable

I have a function def act(obs) that returns a float and is computationally expensive (takes some time to run).
import time
import random

def act(obs):
    time.sleep(5)  # mimic computation time
    action = random.random()
    return action
I regularly call this function from a script, at a faster rate than it takes to execute, and I do not want any waiting time when calling it: I would rather use the value returned by an earlier computation. How do I achieve this?
Something I have thought of is a global variable that the function updates while I keep reading it, although I am not sure that is the best way to achieve this.
This is what I ended up using, based on this answer:
import threading
import time

class MyClass:
    def __init__(self):
        self.is_updating = False
        self.result = -1

    def _act(self, obs):
        self.is_updating = True
        time.sleep(5)
        self.result = obs
        self.is_updating = False

    def act(self, obs):
        if not self.is_updating:
            threading.Thread(target=self._act, args=[obs]).start()
        return self.result

agent = MyClass()
i = 0
while True:
    agent.act(obs=i)
    time.sleep(2)
    print(i, agent.result)
    i += 1
The global variable way should work. You can also have a class with a private member, say result, a flag isComputing, and a method getResult which starts a compute() method (via a thread) if no computation is currently running, and returns the previous result. The compute() method should update the isComputing flag properly.
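For reference, a minimal sketch of the class this answer describes; the names compute, isComputing and getResult follow the answer's wording rather than any real API, and the Lock is an extra safeguard, since testing and setting a bare flag from two threads is not atomic:

import threading
import time

class Computer:
    def __init__(self):
        self._lock = threading.Lock()
        self.isComputing = False
        self.result = -1  # previous result, returned while a computation runs

    def compute(self, obs):
        time.sleep(5)  # mimic the expensive computation
        with self._lock:
            self.result = obs
            self.isComputing = False

    def getResult(self, obs):
        with self._lock:
            start = not self.isComputing
            if start:
                self.isComputing = True
        if start:
            threading.Thread(target=self.compute, args=[obs]).start()
        return self.result

agent = Computer()
print(agent.getResult(1))  # prints -1; the computation of 1 runs in the background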

How to change "for" into a multithreaded pool in python

So I made this program that I want to loop forever until closed. At the moment I use this piece of code:
while True:
    a = start()
    for aaa in a:
        check(a[aaa], 0)
But that is pretty slow. How can I multithread this? Here is my attempt, which is of course incorrect:
pool = ThreadPool(threads)
results = pool.map(check, a, 0)
I tried that code with threads = 1, and it just gave nothing. Could anyone help me with this?
==== EDIT ====
Start function:
def start():
    global a
    url = "URL_WAS_HERE"  # receives JSON like {"a": 56564356, "b": 654653453} etc.
    r = requests.get(url)
    a = json.loads(r.text)
    return a
Check function:
def check(idd, tries):
    global checked
    global snipe
    global notworking
    if tries < 1:
        checked = checked + 1
        url = "URL_WAS_HERE" + str(idd)  # receives JSON with extra information about the id
        r = requests.get(url)
        try:
            b = json.loads(r.text)
            if b['rap'] > b['best_price']:
                difference = b['rap'] - b['best_price']
                print(str(idd)+" has a "+str(difference)+"R$ difference. Price: "+str(b['best_price'])+" //\\ Rap: "+str(b['rap']))
                snipe = snipe + 1
        except:
            time.sleep(1)
            tries = tries + 1
            notworking = notworking + 1
            check(idd, tries)
    settitle("Snipes; "+str(snipe)+" //\\ Checked; "+str(checked)+" //\\ Errors; "+str(notworking))
I hope this helps a bit
Perhaps start by using a documented class, ThreadPoolExecutor; ThreadPool (from multiprocessing.pool) is undocumented.
The docs offer minimal examples to get you started. For your example, try the following construction:
from concurrent.futures import ThreadPoolExecutor, as_completed

values_to_test = start().values()  # the ids fetched by the question's start()
result_container = []
with ThreadPoolExecutor(max_workers=2) as executor:  # set `max_workers` as appropriate
    pool = {executor.submit(check, val, tries=0): val for val in values_to_test}
    for future in as_completed(pool):
        try:
            result_container.append(future.result())
        except:
            pass  # handle exceptions here
If you are set on using the map method, you cannot pass 0 as an argument because it is not an iterable; see the method signature.
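If map is what you want, one way (a sketch, not the only one, reusing check and values_to_test from above) is to freeze the constant argument with functools.partial, so that map only has to iterate over the ids:

from concurrent.futures import ThreadPoolExecutor
from functools import partial

check_once = partial(check, tries=0)  # every call becomes check(idd, tries=0)

with ThreadPoolExecutor(max_workers=2) as executor:
    # map() expects one iterable per extra positional argument,
    # so pass only the ids and let partial supply tries
    results = list(executor.map(check_once, values_to_test))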

python thread pool copy parameters

I'm learning about multithreading, and I am trying to implement a few things to understand it.
After reading several (and very technical) topics, I cannot find a solution or a way to understand my issue.
Basically, I have the following structure:
import datetime
from multiprocessing import Pool

class MyObject():
    def __init__(self):
        self.lastupdate = datetime.datetime.now()

    def DoThings(self):
        ...

def MyThreadFunction(OneOfMyObject):
    OneOfMyObject.DoThings()
    OneOfMyObject.lastupdate = datetime.datetime.now()

def main():
    MyObject1 = MyObject()
    MyObject2 = MyObject()
    MyObjects = [MyObject1, MyObject2]
    pool = Pool(2)
    while True:
        pool.map(MyThreadFunction, MyObjects)

if __name__ == '__main__':
    main()
I think the .map function makes a copy of my objects, because it does not update the time. Is that right? If yes, how could I pass in a global version of my objects? If not, would you have any idea why the time stays fixed in my objects?
When I check the new time with print(MyObject.lastupdate), the time is right, but not in the next loop.
Thank you very much for any of your ideas.
Yes, a multiprocessing Pool will serialize (actually, pickle) your objects and then reconstruct copies of them in the workers. However, it also sends them back. To recover the updated objects, see the commented additions to the code below:
import datetime
from multiprocessing import Pool

class MyObject():
    def __init__(self):
        self.lastupdate = datetime.datetime.now()

    def DoThings(self):
        ...

def MyThreadFunction(OneOfMyObject):
    OneOfMyObject.DoThings()
    OneOfMyObject.lastupdate = datetime.datetime.now()
    # NOW, RETURN THE OBJECT
    return OneOfMyObject

def main():
    MyObject1 = MyObject()
    MyObject2 = MyObject()
    MyObjects = [MyObject1, MyObject2]
    # a context manager is a neater way of doing it than a bare while loop,
    # for various reasons; check out context managers if interested
    with Pool(2) as pool:
        # Now we recover a list of the updated objects:
        processed_object_list = pool.map(MyThreadFunction, MyObjects)
        # Now inspect
        for my_object in processed_object_list:
            print(my_object.lastupdate)

if __name__ == '__main__':
    main()
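A sketch of an alternative (not from the answer above): if DoThings is mostly I/O-bound, the thread-based pool in multiprocessing.pool shares memory instead of pickling, so the objects are updated in place and nothing needs to be returned:

import datetime
from multiprocessing.pool import ThreadPool  # thread-based pool with the same API as Pool

class MyObject():
    def __init__(self):
        self.lastupdate = datetime.datetime.now()

    def DoThings(self):
        pass  # placeholder for the real work

def MyThreadFunction(OneOfMyObject):
    OneOfMyObject.DoThings()
    OneOfMyObject.lastupdate = datetime.datetime.now()  # mutates the original object

if __name__ == '__main__':
    MyObjects = [MyObject(), MyObject()]
    with ThreadPool(2) as pool:
        pool.map(MyThreadFunction, MyObjects)
    for my_object in MyObjects:
        print(my_object.lastupdate)  # reflects the update made in the worker thread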

An odd situation when testing multiprocessing and threading with Python

I am using a process pool (of 3 processes). In every process, I create some threads using a thread class, to speed up the handling of some work.
At first everything was OK, but when I wanted to change a variable from inside a thread, I met an odd situation.
For testing, and to see what happens, I set up a global variable COUNT. Honestly, I don't know whether this is safe or not; I just want to see whether, using multiprocessing and threading, I can change COUNT or not:
#!/usr/bin/env python
# encoding: utf-8
import os
import threading
from Queue import Queue, Empty
from multiprocessing import Process, Pool

# global variables
max_threads = 11
Stock_queue = Queue()
COUNT = 0

class WorkManager:
    def __init__(self, work_queue_size=1, thread_pool_size=1):
        self.work_queue = Queue()
        self.thread_pool = []  # starts empty, no threads yet
        self.work_queue_size = work_queue_size
        self.thread_pool_size = thread_pool_size
        self.__init_work_queue()
        self.__init_thread_pool()

    def __init_work_queue(self):
        for i in xrange(self.work_queue_size):
            self.work_queue.put((func_test, Stock_queue.get()))

    def __init_thread_pool(self):
        for i in xrange(self.thread_pool_size):
            self.thread_pool.append(WorkThread(self.work_queue))

    def finish_all_threads(self):
        for i in xrange(self.thread_pool_size):
            if self.thread_pool[i].is_alive():
                self.thread_pool[i].join()

class WorkThread(threading.Thread):
    def __init__(self, work_queue):
        threading.Thread.__init__(self)
        self.work_queue = work_queue
        self.start()

    def run(self):
        while self.work_queue.qsize() > 0:
            try:
                func, args = self.work_queue.get(block=False)
                func(args)
            except Empty:  # the Empty exception lives at module level, not on the Queue class
                print 'queue is empty....'

def handle(process_name):
    print process_name, 'is running...'
    work_manager = WorkManager(Stock_queue.qsize()/3, max_threads)
    work_manager.finish_all_threads()

def func_test(num):
    # use a global variable to test what happens
    global COUNT
    COUNT += num

def prepare():
    # prepare the test queue: store 50 numbers in Stock_queue
    for i in xrange(50):
        Stock_queue.put(i)

def main():
    prepare()
    pools = Pool()
    # submit 3 processes
    for i in xrange(3):
        pools.apply_async(handle, args=('process_'+str(i),))
    pools.close()
    pools.join()
    global COUNT
    print 'COUNT: ', COUNT

if __name__ == '__main__':
    os.system('printf "\033c"')
    main()
Now, the final result of COUNT is just 0. I am unable to understand what is happening here.
You print the COUNT var in the parent process. Variables don't sync across processes, because processes don't share memory; that means the variable stays 0 in the parent process and is increased only in the subprocesses.
In the case of threading, threads do share memory, which means they share the variable: inside each subprocess the threads should indeed see COUNT grow above 0. But those threads live in the subprocesses, and when they change the variable, the change is not reflected in the other processes, nor in the parent.
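If the goal is a single counter that every process (and its threads) can increase safely, the standard-library way is a shared Value guarded by its built-in lock. A minimal sketch, independent of the code above:

from multiprocessing import Pool, Value

def init(shared_count):
    # runs once in each worker process and makes the shared Value visible there
    global COUNT
    COUNT = shared_count

def worker(num):
    with COUNT.get_lock():  # the Value carries its own cross-process lock
        COUNT.value += num

if __name__ == '__main__':
    count = Value('i', 0)  # 'i' = C int, allocated in shared memory
    pool = Pool(3, initializer=init, initargs=(count,))
    pool.map(worker, range(50))
    pool.close()
    pool.join()
    print(count.value)  # 1225, i.e. 0 + 1 + ... + 49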

Handle multiple IO errors

I have several IO operations that I carry out on class init, but they often fail with IOError. What I would like to do is delay a few hundred ms and try again, until success or some defined timeout. How can I make sure each individual command succeeds before continuing/ending the loop? I assume there is a better way than an if statement for each item plus a counter to check that all commands succeeded.
My current code below often fails with IOError and hangs the rest of the application.
def __init__(self):
    print("Pressure init.")
    self.readCoefficients()

def readCoefficients(self):
    global a0_MSB
    global a0_LSB
    global b1_MSB
    global b1_LSB
    global b2_MSB
    global b2_LSB
    global c12_MSB
    global c12_LSB
    a0_MSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_A0_COEFF_MSB+0)
    a0_LSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_A0_COEFF_LSB+0)
    b1_MSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_B1_COEFF_MSB+0)
    b1_LSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_B1_COEFF_LSB+0)
    b2_MSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_B2_COEFF_MSB+0)
    b2_LSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_B2_COEFF_LSB+0)
    c12_MSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_C12_COEFF_MSB+0)
    c12_LSB = Pressure.bus.read_byte_data(Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_C12_COEFF_LSB+0)
Are you wanting to retry each one of those last 8 lines independently, or as a group? If independently, you will want to make a little helper function:
import time

def retry_function(tries, function, *args, **kwargs):
    # `tries` must be at least 1, or `last_error` below will be undefined!
    for attempt in range(tries):  # `try` is a keyword, so use another name
        try:
            return function(*args, **kwargs)
        except IOError as e:
            last_error = e
            time.sleep(.005)
    raise last_error  # the last error raised inside the loop
Then call it like this:
a0_MSB = retry_function(5, Pressure.bus.read_byte_data, Pressure.MPL115A2_ADDRESS, Pressure.MPL115A2_REGISTER_A0_COEFF_MSB+0)
If not independently but as a group, you probably still want this helper function, but you'll have to rewrite it to handle a list of functions/arguments, or pass in another custom function; a rough sketch follows.
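A possible shape of that group version (a hypothetical helper; the second answer below implements the same idea as mio_read): collect the calls as (function, args) pairs and keep retrying only the ones that failed:

import time

def retry_group(tries, calls):
    # calls: a list of (function, args) tuples; returns their results in order
    results = [None] * len(calls)
    pending = list(enumerate(calls))
    for attempt in range(tries):
        still_failing = []
        for index, (function, args) in pending:
            try:
                results[index] = function(*args)
            except IOError:
                still_failing.append((index, (function, args)))
        if not still_failing:
            return results
        time.sleep(.005)
        pending = still_failing
    raise IOError("%d reads still failing after %d tries" % (len(pending), tries))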
If it's OK for you that all the values are read one after the other, you can use a simple helper method.
import time
# ...

def readCoefficients(self):
    global a0_MSB
    global a0_LSB
    global b1_MSB
    global b1_LSB
    global b2_MSB
    global b2_LSB
    global c12_MSB
    global c12_LSB
    max_retries = 15
    a0_MSB = self.readretry(Pressure.MPL115A2_REGISTER_A0_COEFF_MSB+0, max_retries)
    a0_LSB = self.readretry(Pressure.MPL115A2_REGISTER_A0_COEFF_LSB+0, max_retries)
    b1_MSB = self.readretry(Pressure.MPL115A2_REGISTER_B1_COEFF_MSB+0, max_retries)
    b1_LSB = self.readretry(Pressure.MPL115A2_REGISTER_B1_COEFF_LSB+0, max_retries)
    b2_MSB = self.readretry(Pressure.MPL115A2_REGISTER_B2_COEFF_MSB+0, max_retries)
    b2_LSB = self.readretry(Pressure.MPL115A2_REGISTER_B2_COEFF_LSB+0, max_retries)
    c12_MSB = self.readretry(Pressure.MPL115A2_REGISTER_C12_COEFF_MSB+0, max_retries)
    c12_LSB = self.readretry(Pressure.MPL115A2_REGISTER_C12_COEFF_LSB+0, max_retries)

def readretry(self, address, max_retries):
    for i in range(max_retries):
        try:
            return Pressure.bus.read_byte_data(
                Pressure.MPL115A2_ADDRESS,
                address
            )
        except IOError as e:
            # print(e)
            time.sleep(0.1)
    else:
        raise IOError("Reading failed after multiple tries")
Note: you should not use globals, especially in classes.
This is another way of doing it: this code tries to read all the addresses, and saves the ones that failed. Then it waits a little and retries the failed addresses, until all addresses have been read properly or the number of allowed retries is exceeded.
def readCoefficients(self):
    (
        a0_MSB, a0_LSB,
        b1_MSB, b1_LSB,
        b2_MSB, b2_LSB,
        c12_MSB, c12_LSB) = self.mio_read(15,
            Pressure.MPL115A2_REGISTER_A0_COEFF_MSB+0,
            Pressure.MPL115A2_REGISTER_A0_COEFF_LSB+0,
            Pressure.MPL115A2_REGISTER_B1_COEFF_MSB+0,
            Pressure.MPL115A2_REGISTER_B1_COEFF_LSB+0,
            Pressure.MPL115A2_REGISTER_B2_COEFF_MSB+0,
            Pressure.MPL115A2_REGISTER_B2_COEFF_LSB+0,
            Pressure.MPL115A2_REGISTER_C12_COEFF_MSB+0,
            Pressure.MPL115A2_REGISTER_C12_COEFF_LSB+0
        )

def mio_read(self, max_retries, *addresses):
    # Create storage for the results
    results = [None] * len(addresses)
    # Keep track of the index of a particular address in the list of results
    ios = list(enumerate(addresses))
    for i in range(max_retries):
        failedios = []
        for index, address in ios:
            try:
                results[index] = Pressure.bus.read_byte_data(
                    Pressure.MPL115A2_ADDRESS,
                    address
                )
            except IOError:
                # Put the address in the queue for the next round
                failedios.append((index, address))
        # If all succeeded
        if len(failedios) == 0:
            return results
        # The delay may be reduced, as some time was already spent checking the other addresses
        time.sleep(0.1)
        ios = failedios
    else:
        raise IOError(",".join(str(addr) for ind, addr in failedios))
