I know this is an old question and have already found answers in other questions like this thread here. However, I have some problems applying it in my case.
The way I have things set up right now is the following: I have my MainWindow class where I can input some data. Then I have a Worker class, which is a PySide2.QtCore.QThread object, to which I pass some input data from the MainWindow. Inside this Worker class there is a method that sets up some ODEs, which another method of the Worker class solves with scipy.integrate.solve_ivp. When the integration is done, I send the results via a signal back to the MainWindow. So the code roughly looks like this:
from PySide2 import QtCore, QtWidgets
from scipy.integrate import solve_ivp

class Worker(QtCore.QThread):
    def __init__(self, *args, **kwargs):
        super(Worker, self).__init__()
        "Here I collect input parameters"

    def run(self):
        "Here I call solve_ivp for the integration and send a signal with the solution when it is done"

    def ode_fun(self, t, c):
        "Function where the ode equations are set up"

class Ui_MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        "set up the GUI"
        self.btnStartSimulation.clicked.connect(self.start_simulation)  # button to start the integration

    def start_simulation(self):
        self.watchthread(Worker)
        self.thread.start()

    def watchthread(self, worker):
        self.thread = worker("input values")
        "connect to signals from the thread"
Now I understand that, using the multiprocessing module, I should be able to run the integration on another processor core to make it faster and the GUI less laggy. However, from the link above I am not sure how to apply this module, or even how to restructure my code. Do I have to put the code I now have in my Worker class into another class, or can I somehow apply the multiprocessing module to my existing thread?
Any help is greatly appreciated!
Edit:
The new code looks like this:
class Worker(QtCore.QThread):
    def __init__(self, *args, **kwargs):
        super(Worker, self).__init__()
        self.operation_parameters = args[0]
        self.growth_parameters = args[1]
        self.osmolality_parameters = args[2]
        self.controller_parameters = args[3]
        self.c_zero = args[4]

    def run(self):
        data = multiprocessing.Queue()
        input_dict = {"function": self.ode_fun_vrabel_rushton_scaba_cont_co2_oxygen_biomass_metabol,
                      "time": [0, self.t_final],
                      "initial values": self.c_zero}
        data.put(input_dict)
        self.ode_process = multiprocessing.Process(target=self.multi_process_function, args=(data,))
        self.ode_process.start()
        self.solution = data.get()

    def multi_process_function(self, data):
        self.message_signal = True
        input_dict = data.get()
        solution = solve_ivp(input_dict["function"], input_dict["time"],
                             input_dict["initial values"], method="BDF")
        data.put(solution)

    def ode_fun(self, t, c):
        "Function where the ode equations are set up"
        (...) = self.operation_parameters
        (...) = self.growth_parameters
        (...) = self.osmolality_parameters
        (...) = self.controller_parameters
Is it okay to access the parameters in the ode_fun function via self."parameter_name", or do I also have to pass them with the data parameter?
With the current code I receive the following error: TypeError: can't pickle Worker objects
You could call it from your worker like this:
from PySide2 import QtCore, QtWidgets
from scipy.integrate import solve_ivp
import multiprocessing

class Worker(QtCore.QThread):
    def __init__(self, *args, **kwargs):
        super(Worker, self).__init__()
        self.ode_process = None
        "Here I collect input parameters"

    def run(self):
        "Here I call solve_ivp for the integration and send a signal with the solution when it is done"
        data = multiprocessing.Queue()
        data.put("all objects needed in the process; I would suggest a single dict from which you extract all data")
        self.ode_process = multiprocessing.Process(target="your heavy duty function", args=(data,))
        self.ode_process.start()  # this is non-blocking
        # if you want it to block:
        self.ode_process.join()
        # make sure you remove all input data from the queue and fill it with the result, then to get it back:
        results = data.get()
        print(results)  # or do with it what you want to do...

    def ode_fun(self, t, c):
        "Function where the ode equations are set up"

class Ui_MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        "set up the GUI"
        self.btnStartSimulation.clicked.connect(self.start_simulation)  # button to start the integration

    def start_simulation(self):
        self.watchthread(Worker)
        self.thread.start()

    def watchthread(self, worker):
        self.thread = worker("input values")
        "connect to signals from the thread"
Also beware that you would now overwrite the running process every time you press the button to start the simulation. You may want to use some sort of lock for that.
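Regarding the TypeError: can't pickle Worker objects from the edit above: multiprocessing has to pickle the process target and everything you put on the queue, and a bound method such as self.multi_process_function drags the whole (unpicklable) QThread instance along with it. One fix is to keep the heavy functions at module level and pass the parameters through the queue instead of reading them from self. A minimal sketch of that restructuring (the dict keys mirror the edit above; ode_fun's body and the parameter plumbing are placeholders, not the actual model; two queues keep the parent and the child from competing for the same item):

import multiprocessing

from PySide2 import QtCore
from scipy.integrate import solve_ivp

def ode_fun(t, c, parameters):
    "Module-level right-hand side: picklable, parameters passed in explicitly"
    return ...  # set up the ODE equations from `parameters`

def solve_ode(input_queue, result_queue):
    "Runs in the child process; it only ever sees picklable objects"
    input_dict = input_queue.get()
    solution = solve_ivp(
        lambda t, c: ode_fun(t, c, input_dict["parameters"]),
        input_dict["time"], input_dict["initial values"], method="BDF")
    result_queue.put(solution)

class Worker(QtCore.QThread):
    def run(self):
        input_queue = multiprocessing.Queue()
        result_queue = multiprocessing.Queue()
        input_queue.put({"parameters": self.operation_parameters,  # plain data only
                         "time": [0, self.t_final],
                         "initial values": self.c_zero})
        process = multiprocessing.Process(target=solve_ode,
                                          args=(input_queue, result_queue))
        process.start()
        self.solution = result_queue.get()  # blocks this worker thread, not the GUI
        process.join()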
I'm aware of this structure:
class MyThread(QThread):
    def __init__(self):
        super().__init__()

    def run(self):
        pass  # do stuff

t = MyThread()
t.start()
With regular threading.Thread you can do something like this:
def stuff():
    pass  # do stuff

t = threading.Thread(target=stuff)
t.start()
Any way to do this in pyqt5 with QThreads? Something like this:
t = QThread(target=stuff)
t.start()
I tried that but I got this error:
TypeError: 'target' is an unknown keyword argument
You can accept the function as a custom argument in __init__, store a reference to it in an instance attribute, and then call it in run():
class MyThread(QThread):
    def __init__(self, target=None):
        super().__init__()
        self.target = target

    def run(self):
        if self.target:
            self.target()

def stuff():
    pass  # do something

t = MyThread(target=stuff)
t.start()
Be aware that access to UI elements is not allowed in external threads, so don't use the threaded function to do anything related to UI: reading values and properties is unreliable, and writing can cause your program to crash.
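If the threaded function produces a result the UI needs, the usual pattern is to emit it through a signal; Qt then delivers it in the receiver's thread. A minimal sketch of that idea (finished_with_result and stuff are illustrative names; delivery requires a running QApplication event loop):

from PyQt5.QtCore import QThread, pyqtSignal

class MyThread(QThread):
    finished_with_result = pyqtSignal(object)  # carries the return value

    def __init__(self, target=None, parent=None):
        super().__init__(parent)
        self.target = target

    def run(self):
        if self.target:
            result = self.target()
            self.finished_with_result.emit(result)  # thread-safe hand-off

def stuff():
    return 42

t = MyThread(target=stuff)
t.finished_with_result.connect(print)  # the slot runs in the main thread
t.start()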
I'm new to threading and python. I would like to understand how to pass multiple arguments from one class to another class in python using threading.
From the main thread I start a class, Process; inside its run I do some business logic and then start another class, build, as a thread, passing multiple arguments.
The run of the build class gets executed, but inside the build class I'm unable to access those arguments and hence cannot proceed further.
I'm not sure if my approach is right; any suggestions will be appreciated.
Below is my main script:
from threading import Thread
import logging as log
from process import Process

if __name__ == '__main__':
    try:
        proc = Process()
        proc.start()
    except Exception as e:
        log.error(e)  # log some error
Inside Process:
# all the dependencies are imported

class Process(Thread):
    '''
    classdocs
    '''

    def __init__(self):
        '''
        Constructor
        '''
        Thread.__init__(self)
        # other initializations

    def run(self):
        # some other logic
        self.notification(pass_some_data)

    # inside notification I'm calling another thread
    def notification(self, passed_data):
        # passed_data is converted to dict1
        # tup1 is formed by another function.
        # build is a class, and if I don't pass None, I get a group-name error.
        th = build(None, (tup1,), (dict1,))
        th.start()
# inside build

class build(Thread):
    def _init_(self, tup1, dict1):
        super(build, self).__init__(self)
        self.tup1 = tup1
        self.dict1 = dict1

    def run(self):
        # some business logic
        # I'm unable to get the arguments being passed here.
        pass
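For what it's worth, two details in the posted build class would explain the symptom: the constructor is spelled _init_ with single underscores, so Thread's own __init__ runs instead and the arguments are never stored, and super(build, self).__init__(self) passes self as Thread's group argument (which matches the group-name error mentioned above). A minimal corrected sketch, assuming build only needs the tuple and the dict:

from threading import Thread

class build(Thread):
    def __init__(self, tup1, dict1):      # double underscores, not _init_
        super(build, self).__init__()     # no extra argument: Thread's first parameter is `group`
        self.tup1 = tup1
        self.dict1 = dict1

    def run(self):
        # the arguments are now reachable as instance attributes
        print(self.tup1, self.dict1)

th = build((1, 2), {"key": "value"})      # no leading None needed any more
th.start()
th.join()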
I am a bit frustrated about not being able to solve this seemingly simple problem:
I have a function that takes some time to load data:
import time

def import_data(id):
    time.sleep(5)
    return 'data' + str(id)
A DataModel class calls this function and manages two datasets.
class DataModel():
    def __init__(self):
        self._data_1 = import_data(1)
        self._data_2 = import_data(2)

    def retrieve_data_1(self):
        return self._data_1

    def retrieve_data_2(self):
        return self._data_2
Now, the main UI creates the DataModel, calling both import_data functions, which blocks it.
def main_ui():
    # This takes 5 seconds for each dataset and blocks the main UI thread
    dm = DataModel()
    # Other stuff is happening. This time could be used to load data in the background
    time.sleep(2)
    # Retrieve the first dataset
    data_1 = dm.retrieve_data_1()
    # User interaction. This time could be used to load even larger datasets
    time.sleep(10)
    # Retrieve the second dataset
    data_2 = dm.retrieve_data_2()
I want the datasets to be loaded in the background to reduce the time the UI is blocked.
My idea would be to implement it like this pseudocode:
class DataModel():
    def __init__(self):
        self._data_1 = Thread(import_data(1)).start()
        self._data_2 = Thread(import_data(2)).start()

    def retrieve_data_1(self):
        return self._data_1.wait_for_result()

    def retrieve_data_2(self):
        return self._data_2.wait_for_result()
The import_data functions are called in separate threads and return Future objects.
The retrieve_data functions either block the main thread waiting for the Future to evaluate or return its result instantly.
Is there an easy way to implement this in Python 3.x with threading and/or asyncio? Thanks in advance!
(Edit: syntax correction)
Use the concurrent.futures module which is designed exactly for that kind of usage:
import concurrent.futures

_pool = concurrent.futures.ThreadPoolExecutor()

class DataModel():
    def __init__(self):
        self._data_1 = _pool.submit(import_data, 1)
        self._data_2 = _pool.submit(import_data, 2)

    def retrieve_data_1(self):
        return self._data_1.result()

    def retrieve_data_2(self):
        return self._data_2.result()
If your functions are global and your data is serializable, you can even seamlessly switch from ThreadPoolExecutor to ProcessPoolExecutor and benefit from true (process-based) parallelism.
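For reference, a self-contained sketch of that switch (here the pool is passed in explicitly rather than kept as a module global); the if __name__ == '__main__' guard matters because ProcessPoolExecutor workers re-import the module:

import concurrent.futures
import time

def import_data(id):
    # must be a module-level (picklable) function for ProcessPoolExecutor
    time.sleep(5)
    return 'data' + str(id)

class DataModel():
    def __init__(self, pool):
        self._data_1 = pool.submit(import_data, 1)
        self._data_2 = pool.submit(import_data, 2)

    def retrieve_data_1(self):
        return self._data_1.result()  # blocks only if not finished yet

    def retrieve_data_2(self):
        return self._data_2.result()

if __name__ == '__main__':
    with concurrent.futures.ProcessPoolExecutor() as pool:
        dm = DataModel(pool)         # both loads start immediately
        print(dm.retrieve_data_1())  # waits for the first worker
        print(dm.retrieve_data_2())  # usually already done: ran in parallel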
I have the following code:
class Functions(QObject):
    mysig = Signal(str)

    def __init__(self, parent=None):
        super(Functions, self).__init__(parent)
        self.result = None

    def showDialog(self, filename):
        self.mysig.emit(filename)

    def grabResult(self):
        while not self.result:
            time.sleep(5)
        return self.result  # this is the question

    def setResult(self, result):
        self.result = result
The other part of the code has this:
class Dialog(QDialog):
    anotherSig = Signal(str)
    fun = Functions()

    def __init__(self, parent=None, filename=None):
        self.filename = filename
        # Here it displays a picture based on the filename parameter

    def okButtonClicked(self):
        text = self.lineedit.text()
        self.fun.setResult(text)
        # Tried also this:
        self.anotherSig.emit(text)
The Functions() class is called from a worker QThread (not shown here).
I guess my question is this: how do I tell my Functions class that the user has entered the text and clicked the OK button? I tried connecting that anotherSig signal, but when I do, Qt complains about QPixmaps not being safe to set from a different thread, and it doesn't work.
The method that I am using here "works", but I feel it's not very reliable. Plus, it only works when all of the relevant methods in the Functions class are decorated with @classmethod; the way shown here, for some reason, it doesn't work: setResult is called (I added a print statement to make sure), but grabResult still sees self.result as None.
This code is not working because the call to showDialog happens on a Functions instance that is an attribute of whatever object lives on the other thread. Your fun in Dialog, which you set the result on, is a different instance.
To move the results back to the original Functions object, I think you need to connect anotherSig of the Dialog object to the setResult method of the Functions object you want the results back in.
Does something like this work? (Hard to test without a good bit of boilerplate.)
class Functions(QObject):
    mysig = Signal(str, QObject)

    def __init__(self, parent=None):
        super(Functions, self).__init__(parent)
        self.result = None

    def showDialog(self, filename):
        self.mysig.emit(filename, self)

    def grabResult(self):
        while not self.result:
            time.sleep(5)
        return self.result

    @QtCore.Slot(str)
    def setResult(self, result):
        self.result = result

def connection_fun(filename, fun):
    d = Dialog(filename)
    # whatever else you do in here
    d.anotherSig.connect(fun.setResult)
Using time.sleep causes your application to freeze. One way to make your class wait while keeping the application responsive is to use a QEventLoop, like this:
loop = QEventLoop()
myDialog.mySignal.connect(loop.quit)
loop.exec_()
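For completeness, a small self-contained helper built on the same idea (PySide2-style imports assumed; anotherSig is the dialog's signal from the question):

from PySide2.QtCore import QEventLoop

def wait_for_result(dialog):
    "Block the caller until the dialog's signal fires, while still processing events"
    result = []
    loop = QEventLoop()

    def on_result(text):
        result.append(text)
        loop.quit()            # ends loop.exec_() below

    dialog.anotherSig.connect(on_result)
    loop.exec_()               # spins a local event loop while we wait
    return result[0] if result else None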
I'm trying to share a composite structure through a multiprocessing manager, but I ran into a "RuntimeError: maximum recursion depth exceeded" when trying to use just one of the Composite class methods.
The class is taken from code.activestate and was tested by me before inclusion into the manager.
When retrieving the class into a process and invoking its addChild() method I keep getting the RuntimeError, while outside the process it works.
The composite class inherits from a SpecialDict class that implements a __getattr__() method.
Could it be that while calling addChild() the Python interpreter looks up a different __getattr__() because the right one is not proxied by the manager?
If so, the right way to make a proxy for that class/method is not clear to me.
The following code reproduces exactly this condition:
1) this is the manager.py:
from multiprocessing.managers import BaseManager
from CompositeDict import *

class PlantPurchaser():
    def __init__(self):
        self.comp = CompositeDict('Comp')

    def get_cp(self):
        return self.comp

class Manager():
    def __init__(self):
        self.comp = PlantPurchaser().get_cp()
        BaseManager.register('get_comp', callable=lambda: self.comp)
        self.m = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra')
        self.s = self.m.get_server()
        self.s.serve_forever()
2) I want to use the composite in this consumer.py:
from multiprocessing.managers import BaseManager

class Consumer():
    def __init__(self):
        BaseManager.register('get_comp')
        self.m = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra')
        self.m.connect()
        self.comp = self.m.get_comp()
        ret = self.comp.addChild('consumer')
3) run everything by launching a controller.py:
from multiprocessing import Process

class Controller():
    def __init__(self):
        for child in _run_children():
            child.join()

def _run_children():
    from manager import Manager
    from consumer import Consumer as Consumer
    procs = (
        Process(target=Manager, name='Manager'),
        Process(target=Consumer, name='Consumer'),
    )
    for proc in procs:
        proc.daemon = 1
        proc.start()
    return procs

c = Controller()
Take a look at this related question on how to make a proxy for the CompositeDict() class, as suggested by AlberT.
The solution given by tgray works but cannot avoid race conditions.
Is it possible there is a circular reference between the classes? For example, the outer class has a reference to the composite class, and the composite class has a reference back to the outer class.
The multiprocessing manager works well, but when you have large, complicated class structures you are likely to run into an error where a type/reference cannot be serialized correctly. The other problem is that errors from the multiprocessing manager are very cryptic. This makes debugging failure conditions even more difficult.
I think the problem is that you have to instruct the Manager on how to manage your object, which is not a standard Python type.
In other words, you have to create a proxy for your CompositeDict.
You could look at this doc for an example: http://ruffus.googlecode.com/svn/trunk/doc/html/sharing_data_across_jobs_example.html
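The registration step the linked example revolves around looks roughly like this (assuming the CompositeDict module from the question). Note that a BaseManager proxy only forwards explicitly exposed methods (by default, the referent's public methods), so attribute magic such as the referent's __getattr__ does not cross the process boundary:

from multiprocessing.managers import BaseManager
from CompositeDict import CompositeDict

class CompositeManager(BaseManager):
    pass

# Registering the class makes the manager hand out auto-generated proxies
# instead of pickled copies of the object.
CompositeManager.register('CompositeDict', CompositeDict)

if __name__ == '__main__':
    manager = CompositeManager()
    manager.start()                       # spawns the server process
    comp = manager.CompositeDict('Comp')  # proxy to the instance in the server
    comp.addChild('consumer')             # method call forwarded to the server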
Python has a default maximum recursion depth of 1000 (or 999, I forget...). But you can change the default behavior thusly:
import sys
sys.setrecursionlimit(n)
Where n is the number of recursions you wish to allow.
Edit:
The above answer does nothing to solve the root cause of this problem (as pointed out in the comments). It only needs to be used if you are intentionally recursing more than 1000 times. If you are in an infinite loop (like in this problem), you will eventually hit whatever limit you set.
To address your actual problem, I re-wrote your code from scratch starting as simply as I could make it and built it up to what I believe is what you want:
import sys
from multiprocessing import Process
from multiprocessing.managers import BaseManager
from CompositeDict import *

class Shared():
    def __init__(self):
        self.comp = CompositeDict('Comp')

    def get_comp(self):
        return self.comp

    def set_comp(self, c):
        self.comp = c

class Manager():
    def __init__(self):
        shared = Shared()
        BaseManager.register('get_shared', callable=lambda: shared)
        mgr = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra')
        srv = mgr.get_server()
        srv.serve_forever()

class Consumer():
    def __init__(self, child_name):
        BaseManager.register('get_shared')
        mgr = BaseManager(address=('127.0.0.1', 50000), authkey='abracadabra')
        mgr.connect()
        shared = mgr.get_shared()
        comp = shared.get_comp()
        child = comp.addChild(child_name)
        shared.set_comp(comp)
        print comp

class Controller():
    def __init__(self):
        pass

    def main(self):
        m = Process(target=Manager, name='Manager')
        m.daemon = True
        m.start()

        consumers = []
        for i in xrange(3):
            p = Process(target=Consumer, name='Consumer', args=('Consumer_' + str(i),))
            p.daemon = True
            consumers.append(p)

        for c in consumers:
            c.start()
        for c in consumers:
            c.join()

        return 0

if __name__ == '__main__':
    con = Controller()
    sys.exit(con.main())
I did this all in one file, but you shouldn't have any trouble breaking it up.
I added a child_name argument to your consumer so that I could check that the CompositeDict was getting updated.
Note that there is both a getter and a setter for your CompositeDict object. When I only had a getter, each Consumer overwrote the CompositeDict when it added a child.
This is why I also changed the registered method to get_shared instead of get_comp, as you want access to the setter as well as the getter within your Consumer class.
Also, I don't think you want to try joining your manager process, as it will "serve forever". If you look at the source for BaseManager (./Lib/multiprocessing/managers.py, line 144) you'll notice that the serve_forever() function puts you into an infinite loop that is only broken by KeyboardInterrupt or SystemExit.
Bottom line: this code works without any recursive looping (as far as I can tell), but let me know if you still experience your error.