I remember seeing a post somewhere about getting drawing in Python off the main thread, but I can't seem to find it. My first attempt goes something like the code below, but it doesn't work: it doesn't crash immediately (it does eventually), but no drawing takes place. The idea is that options is a map of drawing functions, each of which draws to a pyqtgraph plot, a QWidget, etc.
from threading import Thread
from Queue import Queue  # Python 2; use "queue" in Python 3

anObject1 = DrawingObject()
anObject2 = DrawingObject()
anObject3 = DrawingObject()

options = {
    0: anObject1.drawing_func,
    1: anObject2.drawing_func,
    2: anObject3.drawing_func,
    3: updateNon,
}

def do_work(item):
    # item is a tuple; item[0] is the index of the drawing function
    options[item[0]](item)

def worker():
    while True:
        item = q.get()
        do_work(item)
        q.task_done()

q = Queue()

# This function is a callback from C++
def callback(s, tuple):
    #options[tuple[0]](tuple)  # this works
    q.put(tuple)  # this does not

num_worker_threads = 3
for i in range(num_worker_threads):
    t = Thread(target=worker)
    t.daemon = True
    t.start()
My understanding is that it is not possible to draw to a QWidget outside the main GUI thread. You can find many references to this in the Qt forums and documentation. However, it is possible to start a subprocess that draws into an image in shared memory, and then display the image in the main process. This is the approach taken by pyqtgraph/widgets/RemoteGraphicsView.py; see examples/RemoteSpeedTest.py for an example of this.
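For reference, a minimal sketch of that approach, modeled on the RemoteSpeedTest example (the exact attribute names may vary between pyqtgraph versions, so treat this as illustrative rather than definitive):

import pyqtgraph as pg
from pyqtgraph.widgets.RemoteGraphicsView import RemoteGraphicsView

app = pg.mkQApp()

view = RemoteGraphicsView()  # spawns a subprocess that renders off the GUI thread
view.show()

plt = view.pg.PlotItem()     # view.pg proxies the pyqtgraph module in the subprocess
view.setCentralItem(plt)
plt.plot([1, 4, 2, 3, 5], clear=True)  # the drawing happens in the remote process

app.exec_()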
Related
I have a generator that looks kind of like this:
import random
import time

class GeneratorClass():
    def __init__(self, objClient):
        self.clienteGen = objClient

    def generatorDataClient(self):
        amount = 0
        while True:
            amount += random.randint(0, 2000)
            yield amount
            sleep = random.choice([1, 2, 3])
            print("sleep " + str(sleep))
            time.sleep(sleep)
Then I iterate through it, which works: it calls the current_mean() method each time new data is generated.
def iterate_clients(pos):
    genobject4 = GeneratorClass(client_list[pos])
    generator4 = genobject4.generatorDataClient()
    current_client = genobject4.default_client
    account1 = current_client.account
    cnt = 0
    acc_mean = 0
    for item in generator4:
        # We call a function previously defined
        acc_mean, cnt = account1.current_mean(acc_mean, item, cnt)
        print("moving average : " + str(acc_mean), str(cnt))

# iterate_clients(2)
And it works: you give it a valid client, it starts generating data and computing a moving average, and since the generator is defined with while True it never stops.
Now I wanted to parallelize this, and I managed to get it to work, but only once:
import multiprocessing

names = ["James", "Anna"]
client_list = [Cliente(name) for name in names]
array_length = len(client_list)

if __name__ == '__main__':
    for i in range(array_length):
        p = multiprocessing.Process(target=iterate_clients, args=(i,))
        p.start()
But instead each process starts, iterates exactly once, then stops. The result is the following:
calling object with ID: 140199258213624
calling the generator
moving average : 4622.0 1
calling object with ID: 140199258211160
sleep 2
calling the generator
moving average : 8013.0 1
sleep 1
I am sure the code can be improved but could it be I am missing some information on how to parallelize this problem in particular?
Edit:
Thanks to this answer I tried changing the loop from for i in range(array_length): to while True:
And I got something new:
calling object 140199258211160
calling the generator
calling object 140199258211160
moving average : 7993.0 1
calling the generator
sleep 3
calling object 140199258211160
calling the generator
calling object 140199258211160
moving average : 8000.0 1
moving average : 7869.0 1
sleep 3
calling the generator
And it never stops. So from this I gather that I am making a big mistake: it seems only one process gets created, and there appears to be a race condition, since the moving average bounces back and forth, whereas in a single normal run it only goes up.
The issue here is probably that the child processes are terminated once the main process finishes, so they never get a chance to run fully.
Using .join() will make the main process wait for the child processes.
...
import multiprocessing

procs = []
if __name__ == '__main__':
    for i in range(array_length):
        p = multiprocessing.Process(target=iterate_clients, args=(i,))
        p.start()
        procs.append(p)  # Hold a reference to each child process

    for proc in procs:
        proc.join()  # Wait for each child process
I want to move some functions to an external file to make the code clearer.
Let's say I have this example code (which does indeed work):
import threading
from time import sleep

testVal = 0

def testFunc():
    global testVal
    while True:
        sleep(1)
        testVal = testVal + 1
        print(testVal)

t = threading.Thread(target=testFunc, args=())
t.daemon = True
t.start()

try:
    while True:
        sleep(2)
        print('testval = ' + str(testVal))
except KeyboardInterrupt:
    pass
Now I want to move testFunc() to a new Python file. My guess was the following, but the global variable doesn't seem to be shared.
testserver.py:
import threading
import testclient
from time import sleep

testVal = 0

t = threading.Thread(target=testclient.testFunc, args=())
t.daemon = True
t.start()

try:
    while True:
        sleep(2)
        print('testval = ' + str(testVal))
except KeyboardInterrupt:
    pass
and testclient.py:
from time import sleep
from testserver import testVal as val

def testFunc():
    global val
    while True:
        sleep(1)
        val = val + 1
        print(val)
My output is:
1
testval = 0
2
3
testval = 0 (testval didn't change)
...
while it should be:
1
testval = 1
2
3
testval = 3
...
Any suggestions? Thanks!
Your immediate problem is not due to multithreading (we'll get to that) but due to how you use global variables. The thing is, when you use this:
from testserver import testVal as val
You're essentially doing this:
import testserver
val = testserver.testVal
i.e. you're creating a local reference val that points to the testserver.testVal value. This is all fine and dandy when you read it (the first time at least) but when you try to assign its value in your function with:
val = val + 1
You're actually re-assigning the local (to testclient.py) val variable, not setting testserver.testVal. You have to reference the module attribute directly (i.e. testserver.testVal += 1) if you want to change its value.
That being said, the next problem you might encounter stems directly from multithreading: a race condition where the GIL pauses one thread right after it reads the value but before it writes the incremented result; the next thread then reads and overwrites the current value; when the first thread resumes, it writes its stale result, so two increments produce a single increase. You need some sort of mutex to make sure that all non-atomic operations execute exclusively in one thread if you want to use your data this way. The easiest way is a Lock from the threading module:
testserver.py:
# ...
testVal = 0
testValLock = threading.Lock()
# ...
testclient.py:
# ...
with testserver.testValLock:
    testserver.testVal += 1
# ...
A third and final problem you might encounter is a circular dependency (testserver.py requires testclient.py, which requires testserver.py), and I'd advise you to rethink the way you want to approach this problem. If all you want is a common global store, create it separately from the modules that depend on it. That way you ensure a proper loading and initialization order without the danger of unresolvable circular dependencies.
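For instance, a minimal sketch of that separate store (the module name shared.py is illustrative, not from the original post):

shared.py:
# Owns the shared state; imports nothing from testserver or testclient
import threading

testVal = 0
testValLock = threading.Lock()

testclient.py:
from time import sleep
import shared

def testFunc():
    while True:
        sleep(1)
        with shared.testValLock:
            shared.testVal += 1
        print(shared.testVal)

testserver.py would likewise import shared and read shared.testVal, so neither module needs to import the other.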
I have a Python GUI program that needs to do the same task but with several threads. The problem is that when I start the threads, they don't execute in parallel but sequentially: the first one runs and ends, then the second one, and so on. I want them to run independently.
The main components are:
1. Menu (view)
2. ProcesStarter (controller)
3. Process (controller)
The Menu is where you click the "Start" button, which calls a function on the ProcesStarter.
The ProcesStarter creates Process objects and threads, and starts all the threads in a for-loop.
Menu:
class VotingFrame(BaseFrame):
    def create_widgets(self):
        self.start_process = tk.Button(root, text="Start Process",
                                       command=lambda: self.start_process())
        self.start_process.grid(row=3, column=0, sticky=tk.W)

    def start_process(self):
        procesor = XProcesStarter()
        procesor_thread = Thread(target=procesor.start_process())
        procesor_thread.start()
ProcesStarter:
class XProcesStarter:
    def start_process(self):
        print "starting new process..."
        # thread count
        thread_count = self.get_thread_count()
        # initialize Process objects with data, and start threads
        for i in range(thread_count):
            vote_process = XProcess(self.get_proxy_list(), self.get_url())
            t = Thread(target=vote_process.start_process())
            t.start()
Process:
class XProcess():
    def __init__(self, proxy_list, url, browser_show=False):
        # init code
        pass

    def start_process(self):
        # code for process
        pass
When I press the GUI "Start Process" button, the GUI locks up until all the threads finish execution.
The idea is that the threads should work in the background, in parallel.
You call procesor.start_process() immediately when specifying it as the target of the Thread:
# use this:
procesor_thread = Thread(target=procesor.start_process)

# not this:
procesor_thread = Thread(target=procesor.start_process())
# this is called right away ----------------------------^
If you call it right away, it returns None, which is a valid target for Thread (it just does nothing). That is why everything happens sequentially: the real work runs during thread construction, and the spawned threads themselves do nothing.
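A quick standalone demonstration of the pitfall (this snippet is mine, not from the question):

from threading import Thread

def work():
    print "working..."

t = Thread(target=work())  # work() runs HERE, on the calling thread
t.start()                  # the thread's target is None, so it does nothing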
One way to run a class on a thread is to pass the class itself as the target and the constructor's arguments as args.
from threading import Thread
from time import sleep
from random import randint

class XProcesStarter:
    def __init__(self, thread_count):
        print("starting new process...")
        self._i = 0
        for i in range(thread_count):
            t = Thread(
                target=XProcess,
                args=(self.get_proxy_list(), self.get_url())
            )
            t.start()

    def get_proxy_list(self):
        self._i += 1
        return "Proxy list #%s" % self._i

    def get_url(self):
        self._i += 1
        return "URL #%d" % self._i

class XProcess():
    def __init__(self, proxy_list, url, browser_show=False):
        r = 0.001 * randint(1, 5000)
        sleep(r)
        print(proxy_list)
        print(url)

def main():
    t = Thread(target=XProcesStarter, args=(4,))
    t.start()

if __name__ == '__main__':
    main()
This code runs in python2 and python3.
The reason is that the target of a Thread object must be a callable (search for "callable" and "__call__" in the Python documentation for a complete explanation).
Edit: the other approach has been explained in other people's answers (see Tadhg McDonald-Jensen).
I think your issue is that in both places where you start threads, you're actually calling the method you want to pass as the target. That runs its code in the main thread (and then tries to start a new thread on the return value, if any, once it's done).
Try:
procesor_thread = Thread(target=procesor.start_process) # no () after start_process
And:
t = Thread(target=vote_process.start_process) # no () here either
I have an application that fires up a series of threads. Occasionally, one of these threads dies (usually due to a network problem). How can I properly detect a thread crash and restart just that thread? Here is example code:
import random
import threading
import time

class MyThread(threading.Thread):
    def __init__(self, pass_value):
        super(MyThread, self).__init__()
        self.running = False
        self.value = pass_value

    def run(self):
        self.running = True
        while self.running:
            time.sleep(0.25)
            rand = random.randint(0, 10)
            print threading.current_thread().name, rand, self.value
            if rand == 4:
                raise ValueError('Returned 4!')

if __name__ == '__main__':
    group1 = []
    group2 = []
    for g in range(4):
        group1.append(MyThread(g))
        group2.append(MyThread(g + 20))

    for m in group1:
        m.start()

    print "Now start second wave..."
    for p in group2:
        p.start()
In this example, I start 4 threads, then I start 4 more. Each thread randomly generates an int between 0 and 10; if that int is 4, it raises an exception. Notice that I don't join the threads: I want both the group1 and group2 threads to be running, and I found that joining a thread blocks until it terminates, so the second set of threads never begins. My real threads are meant to run constantly as daemons and should rarely (if ever) hit an exception like the ValueError this example raises.
How can I detect that a specific thread died and restart just that one thread?
I have attempted the following loop right after my for p in group2 loop.
while True:
    # Create a copy of our groups to iterate over,
    # so that we can delete dead threads if needed
    for m in group1[:]:
        if not m.isAlive():
            group1.remove(m)
            group1.append(MyThread(1))
    for m in group2[:]:
        if not m.isAlive():
            group2.remove(m)
            group2.append(MyThread(500))
    time.sleep(5.0)
I took this method from this question.
The problem with this is that isAlive() seems to always return True, so the threads never restart.
Edit
Would it be more appropriate in this situation to use multiprocessing? I found this tutorial. Is it more appropriate to have separate processes if I am going to need to restart the process? It seems that restarting a thread is difficult.
It was mentioned in the comments that I should check is_active() on the thread. I don't see this in the documentation, but I do see isAlive(), which I am currently using. As I mentioned above, though, it returns True, so I'm never able to see that a thread has died.
I had a similar issue and stumbled across this question. I found that join takes a timeout argument, and that is_alive will return False once the thread is joined. So my audit for each thread is:
def check_thread_alive(thr):
    thr.join(timeout=0.0)
    return thr.is_alive()
This detects thread death for me.
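Applied to the question's restart loop, a sketch could look like this (group1 and MyThread come from the question; note that the replacement thread has to be started explicitly, which I've added here):

while True:
    for m in group1[:]:
        if not check_thread_alive(m):
            group1.remove(m)
            replacement = MyThread(m.value)
            replacement.start()  # a freshly created thread must be started
            group1.append(replacement)
    time.sleep(5.0)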
You could potentially put a try/except around the spot where you expect it to crash (if it could be anywhere, you can wrap the whole run function) and keep an indicator variable that holds the thread's status.
So something like the following:
class MyThread(threading.Thread):
    def __init__(self, pass_value):
        super(MyThread, self).__init__()
        self.running = False
        self.value = pass_value
        self.RUNNING = 0
        self.FINISHED_OK = 1
        self.STOPPED = 2
        self.CRASHED = 3
        self.status = self.STOPPED

    def run(self):
        self.running = True
        self.status = self.RUNNING
        while self.running:
            time.sleep(0.25)
            rand = random.randint(0, 10)
            print threading.current_thread().name, rand, self.value
            try:
                if rand == 4:
                    raise ValueError('Returned 4!')
            except:
                self.status = self.CRASHED
                self.running = False  # stop the loop once the crash is recorded
Then you can use your loop:
while True:
    # Create a copy of our groups to iterate over,
    # so that we can delete dead threads if needed
    for m in group1[:]:
        if m.status == m.CRASHED:
            value = m.value
            group1.remove(m)
            new_thread = MyThread(value)
            new_thread.start()  # the replacement must be started, or it never runs
            group1.append(new_thread)
    for m in group2[:]:
        if m.status == m.CRASHED:
            value = m.value
            group2.remove(m)
            new_thread = MyThread(value)
            new_thread.start()
            group2.append(new_thread)
    time.sleep(5.0)
I'm about to put this design into use in an application, but I'm fairly new to threading and Queue stuff in Python. Obviously the actual application is not for saying hello, but the design is the same: there is a process which takes some time to set up and tear down, but it can handle multiple tasks in one hit. Tasks will arrive at random times, and often in bursts.
Is this a sensible and thread safe design?
class HelloThing(object):
    def __init__(self):
        self.queue = self._create_worker()

    def _create_worker(self):
        import threading, Queue

        def worker():
            while True:
                things = [q.get()]
                while True:
                    try:
                        things.append(q.get_nowait())
                    except Queue.Empty:
                        break
                self._say_hello(things)
                [q.task_done() for task in xrange(len(things))]

        q = Queue.Queue()
        n_worker_threads = 1
        for i in xrange(n_worker_threads):
            t = threading.Thread(target=worker)
            t.daemon = True
            t.start()
        return q

    def _say_hello(self, greeting_list):
        import time, sys
        # setup stuff
        time.sleep(1)
        # do some things
        sys.stdout.write('hello {0}!\n'.format(', '.join(greeting_list)))
        # tear down stuff
        time.sleep(1)

if __name__ == '__main__':
    print 'enter __main__'
    import time
    hello = HelloThing()
    hello.queue.put('world')
    hello.queue.put('cruel world')
    hello.queue.put('stack overflow')
    time.sleep(2)
    hello.queue.put('a')
    hello.queue.put('b')
    time.sleep(2)
    for i in xrange(20):
        hello.queue.put(str(i))
    #hello.queue.join()
    print 'finish __main__'
Thread safety is handled by the Queue implementation (and you must also handle it inside your _say_hello implementation if that is required).
Burst handling problem: a burst should be handled by a single thread only. (For example, say your process setup/teardown takes 10 seconds: at second 1, all threads are busy with the burst from second 0, and at second 5 a new task or burst arrives but no thread is available to handle it.) So a burst should be defined by a maximum number of tasks (or perhaps "infinite") within a specific time window, and an entry in the queue should be a list of tasks.
How can you group tasks into a burst list?
I'll give the solution as code, which is easier to explain...
import time
from Queue import Queue  # Python 2; use "queue" in Python 3

BURST_TIME_WINDOW = 0.5   # seconds; pick a window that fits your workload
producer_q = Queue()
consumer_q = Queue()

def _burst_thread():
    while True:
        available_tasks = [producer_q.get()]
        time.sleep(BURST_TIME_WINDOW)
        # I'm the single consumer, so there will be at least qsize() elements
        available_tasks.extend(producer_q.get()
                               for i in range(producer_q.qsize()))
        consumer_q.put(available_tasks)
If you want a maximum number of messages per burst, you just need to slice available_tasks into multiple lists.
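For instance, a small sketch of that slicing (MAX_BURST is an assumed cap, not something from the answer above):

MAX_BURST = 10  # assumed maximum number of tasks per burst

def _push_in_chunks(tasks):
    # Hand the collected burst to the consumer in chunks of at most MAX_BURST
    for i in range(0, len(tasks), MAX_BURST):
        consumer_q.put(tasks[i:i + MAX_BURST])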