Is there a way to run an arbitrary method whenever a new thread is started in Python (2.7)? My goal is to use setproctitle to set an appropriate title for each spawned thread.
Just inherit from threading.Thread and use that subclass instead of Thread, as long as you have control over the threads.
import threading

class MyThread(threading.Thread):
    def __init__(self, callable, *args, **kwargs):
        super(MyThread, self).__init__(*args, **kwargs)
        self._call_on_start = callable

    def start(self):
        # Note: this runs in the thread that calls start(), just before
        # the new thread is actually spawned.
        self._call_on_start()
        super(MyThread, self).start()
Just as a coarse sketch.
Edit
From the comments, the need arose to "inject" the new behavior into an existing application. Let's assume you have a script that itself imports other libraries, and those libraries use the threading module:
Before importing any other modules, first execute this:
import threading
import time

class MyThread(threading.Thread):
    _call_on_start = None

    def __init__(self, callable_ = None, *args, **kwargs):
        super(MyThread, self).__init__(*args, **kwargs)
        if callable_ is not None:
            self._call_on_start = callable_

    def start(self):
        if self._call_on_start is not None:
            self._call_on_start()
        super(MyThread, self).start()

def set_thread_title():
    print "Set thread title"

# staticmethod keeps Python 2 from turning the function into a bound method
MyThread._call_on_start = staticmethod(set_thread_title)
threading.Thread = MyThread

def calculate_something():
    time.sleep(5)
    print sum(range(1000))

t = threading.Thread(target = calculate_something)
t.start()
time.sleep(2)
t.join()
As subsequent imports only do a lookup in sys.modules, all other libraries that use threading should now pick up our new class. I regard this as a hack, and it might have strange side effects, but at least it's worth a try.
Please note: threading.Thread is not the only way to implement concurrency in Python; there are other options like multiprocessing. Those will be unaffected by this.
Edit 2
I just took a look at the library you cited and it's all about processes, not threads! So just do a :%s/threading/multiprocessing/g and :%s/Thread/Process/g and things should be fine.
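A rough sketch of the same idea applied to processes, assuming the setproctitle package is installed (the title text is just an example):

import multiprocessing
from setproctitle import setproctitle

class MyProcess(multiprocessing.Process):
    def run(self):
        # run() executes in the child process, so the title is set there
        setproctitle("worker: %s" % self.name)
        super(MyProcess, self).run()

# same monkey-patching idea as above
multiprocessing.Process = MyProcess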
Use threading.setprofile. You give it your callback and the threading module installs it in every thread it starts, so the callback runs inside each new thread just as it begins executing (see the threading.setprofile documentation).
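A minimal sketch of that approach; the per-thread setup here is just a print, so swap in whatever title-setting call you need:

import sys
import threading

def on_thread_start(frame, event, arg):
    # Installed via sys.setprofile() in each thread the threading module
    # starts; it fires on the thread's first profile event, does the
    # per-thread setup, then removes itself for that thread.
    print "thread started:", threading.current_thread().name
    sys.setprofile(None)

threading.setprofile(on_thread_start)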
I know this is an old question and have already found answers in other questions like this thread here. However, I have some problems applying it in my case.
The way I have things right now is the following: I have my MainWindow class where I can input some data. Then I have a Worker class, which is a PySide2.QtCore.QThread object. To this class I pass some input data from the MainWindow. Inside this Worker class I have a method which sets up some ODEs, which another method of the Worker class then solves with scipy.integrate.solve_ivp. When the integration is done, I send the results via a signal back to the MainWindow. So the code roughly looks like this:
from PySide2 import QtCore, QtWidgets
from scipy.integrate import solve_ivp

class Worker(QtCore.QThread):
    def __init__(self, *args, **kwargs):
        super(Worker, self).__init__()
        "Here I collect input parameters"

    def run(self):
        "Here I call solve_ivp for the integration and send a signal with the solution when it is done"

    def ode_fun(self, t, c):
        "Function where the ode equations are set up"

class Ui_MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        "set up the GUI"
        self.btnStartSimulation.clicked.connect(self.start_simulation)  # button to start the integration

    def start_simulation(self):
        self.watchthread(Worker)
        self.thread.start()

    def watchthread(self, worker):
        self.thread = worker("input values")
        "connect to signals from the thread"
Now I understand that, using the multiprocessing module, I should be able to run the integration on another processor core to make it faster and the GUI less laggy. However, from the link above I am not sure how I should apply this module or even how to restructure my code. Do I have to put the code that I now have in my Worker class into another class, or can I somehow apply the multiprocessing module to my existing thread?
Any help is greatly appreciated!
Edit:
The new code looks like this:
class Worker(QtCore.QThread):
    def __init__(self, *args, **kwargs):
        super(Worker, self).__init__()
        self.operation_parameters = args[0]
        self.growth_parameters = args[1]
        self.osmolality_parameters = args[2]
        self.controller_parameters = args[3]
        self.c_zero = args[4]

    def run(self):
        data = multiprocessing.Queue()
        input_dict = {"function": self.ode_fun_vrabel_rushton_scaba_cont_co2_oxygen_biomass_metabol,
                      "time": [0, self.t_final],
                      "initial values": self.c_zero}
        data.put(input_dict)
        self.ode_process = multiprocessing.Process(target=self.multi_process_function, args=(data,))
        self.ode_process.start()
        self.solution = data.get()

    def multi_process_function(self, data):
        self.message_signal = True
        input_dict = data.get()
        solution = solve_ivp(input_dict["function"], input_dict["time"],
                             input_dict["initial values"], method="BDF")
        data.put(solution)

    def ode_fun(self, t, c):
        "Function where the ode equations are set up"
        (...) = self.operation_parameters
        (...) = self.growth_parameters
        (...) = self.osmolality_parameters
        (...) = self.controller_parameters
Is it okay if I access the parameters in the ode_fun function via self."parameter_name"? Or do I also have to pass them with the data-parameter?
With the current code I receive the following error: TypeError: can't pickle Worker objects
You could call it from your worker like this:
from PySide2 import QtCore, QtWidgets
from scipy.integrate import solve_ivp
import multiprocessing

class Worker(QtCore.QThread):
    def __init__(self, *args, **kwargs):
        super(Worker, self).__init__()
        self.ode_process = None
        "Here I collect input parameters"

    def run(self):
        "Here I call solve_ivp for the integration and send a signal with the solution when it is done"
        data = multiprocessing.Queue()
        data.put("all objects needed in the process, i would suggest a single dict from which you extract all data")
        self.ode_process = multiprocessing.Process(target="your heavy duty function", args=(data,))
        self.ode_process.start()  # this is non blocking
        # if you want it to block:
        self.ode_process.join()
        # make sure you remove all input data from the queue and fill it with the result, then to get it back:
        results = data.get()
        print(results)  # or do with it what you want to do...

    def ode_fun(self, t, c):
        "Function where the ode equations are set up"

class Ui_MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        "set up the GUI"
        self.btnStartSimulation.clicked.connect(self.start_simulation)  # button to start the integration

    def start_simulation(self):
        self.watchthread(Worker)
        self.thread.start()

    def watchthread(self, worker):
        self.thread = worker("input values")
        "connect to signals from the thread"
Also beware that you would now overwrite the running process every time you press the button to start the simulation. You may want to use some sort of lock for that.
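Regarding the TypeError: can't pickle Worker objects from the edit above: that usually happens because the process target is a bound method of the QThread, so multiprocessing tries to pickle the whole Worker. A sketch of one way around it, assuming the ODE right-hand side can live at module level and its parameters travel through the queue (all names here are illustrative):

from multiprocessing import Process, Queue
from scipy.integrate import solve_ivp

def ode_rhs(t, c, params):
    # illustrative right-hand side; params stands in for the
    # operation/growth/... parameters previously read from self
    return -params["k"] * c

def solve_in_process(data):
    # runs in the child process; everything it needs comes out of the queue
    job = data.get()
    solution = solve_ivp(lambda t, c: ode_rhs(t, c, job["params"]),
                         job["time"], job["initial values"], method="BDF")
    # put back plain arrays rather than the whole result object
    data.put({"t": solution.t, "y": solution.y})

def start_solver(t_final, c_zero, params):
    # called from Worker.run(); only plain, picklable data goes into the queue
    data = Queue()
    data.put({"params": params, "time": [0, t_final], "initial values": c_zero})
    proc = Process(target=solve_in_process, args=(data,))
    proc.start()
    return proc, data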
Background
I have a class in Python that takes in a list of mutexes. It then sorts that list and uses __enter__() and __exit__() to lock/unlock all of the mutexes in a specific order to prevent deadlocks.
The class currently saves us a lot of hassle with potential deadlocks, as we can just invoke it in an RAII style, i.e.:
self.lock = SuperLock(list_of_locks)

# Lock all mutexes.
with self.lock:
    # Issue calls to all hardware protected by these locks.
Problem
We'd like to expose ways for this class to provide an RAII-style API so we can lock only half of the mutexes at once, when called in a certain way, i.e.:
self.lock = SuperLock(list_of_locks)

# Lock all mutexes.
with self.lock:
    # Issue calls to all hardware protected by these locks.

# Lock the first half of the mutexes in SuperLock.list_of_locks
with self.lock.first_half_only:
    # Issue calls to all hardware protected by these locks.

# Lock the second half of the mutexes in SuperLock.list_of_locks
with self.lock.second_half_only:
    # Issue calls to all hardware protected by these locks.
Question
Is there a way to provide this type of functionality so I could invoke with self.lock.first_half_only or with self.lock_first_half_only() to provide a simple API to users? We'd like to keep all this functionality in a single class.
Thank you.
Yes, you can get this interface. The object that will be entered/exited in the context of a with statement is the resolved attribute. So you can go ahead and define context managers as attributes of your context manager:
from contextlib import ExitStack  # on Python 2: pip install contextlib2 and import ExitStack from contextlib2
from contextlib import contextmanager

@contextmanager
def lock(name):
    print("entering lock {}".format(name))
    yield
    print("exiting lock {}".format(name))

@contextmanager
def many(contexts):
    with ExitStack() as stack:
        for cm in contexts:
            stack.enter_context(cm)
        yield

class SuperLock(object):
    def __init__(self, list_of_locks):
        self.list_of_locks = list_of_locks

    def __enter__(self):
        # implement for entering the `with self.lock:` use case
        return self

    def __exit__(self, exce_type, exc_value, traceback):
        pass

    @property
    def first_half_only(self):
        return many(self.list_of_locks[:4])

    @property
    def second_half_only(self):
        # yo dawg, we herd you like with-statements
        return many(self.list_of_locks[4:])
When you create and return a new context manager, you may use state from the instance (i.e. self).
Example usage:
>>> list_of_locks = [lock(i) for i in range(8)]
>>> super_lock = SuperLock(list_of_locks)
>>> with super_lock.first_half_only:
... print('indented')
...
entering lock 0
entering lock 1
entering lock 2
entering lock 3
indented
exiting lock 3
exiting lock 2
exiting lock 1
exiting lock 0
Edit: class based equivalent of the lock generator context manager shown above
class lock(object):
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        print("entering lock {}".format(self.name))
        return self

    def __exit__(self, exce_type, exc_value, traceback):
        print("exiting lock {}".format(self.name))
        # If you want to handle the exception (if any), you may use the
        # return value of this method to suppress re-raising error on exit
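A quick usage sketch, reusing the SuperLock and many() helpers from above (lock names are illustrative):

list_of_locks = [lock("lock-{}".format(i)) for i in range(8)]
super_lock = SuperLock(list_of_locks)

with super_lock.second_half_only:
    print("locks 4-7 are held here")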
from contextlib import contextmanager

class A:
    @contextmanager
    def i_am_lock(self):
        print("entering")
        yield
        print("leaving")

a = A()
with a.i_am_lock():
    print("inside")
Output:
entering
inside
leaving
Further, you can use contextlib.ExitStack to manage your locks better.
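A sketch of how that might look for the "first half only" case, assuming the individual locks are context managers themselves (on Python 2 the ExitStack backport comes from contextlib2):

from contextlib import contextmanager
from contextlib2 import ExitStack  # Python 2 backport; on Python 3 import it from contextlib

class SuperLock(object):
    def __init__(self, list_of_locks):
        self.list_of_locks = list_of_locks

    @contextmanager
    def first_half_only(self):
        half = len(self.list_of_locks) // 2
        with ExitStack() as stack:
            for lock in self.list_of_locks[:half]:
                stack.enter_context(lock)  # acquired now, released in reverse order on exit
            yield

Used as with self.lock.first_half_only(): (note the call parentheses, unlike the property-based version above).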
I'd use a SimpleNamespace to allow attribute access to different SuperLock objects, e.g.:
from types import SimpleNamespace

self.lock = SimpleNamespace(
    all=SuperLock(list_of_locks),
    first_two_locks=SuperLock(list_of_locks[:2]),
    other_locks=SuperLock(list_of_locks[2:])
)

with self.lock.all:
    # Issue calls to all hardware protected by these locks.

with self.lock.first_two_locks:
    # Issue calls to all hardware protected by these locks.

with self.lock.other_locks:
    # Issue calls to all hardware protected by these locks.
Edit:
For Python 2, you can use this class to achieve similar behavior:
class SimpleNamespace:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)
New to SO, please forgive any etiquette errors (point them out if there are any!).
I'm working on a script that runs on the program's main UI thread. That being said, I need to avoid all blocking calls to ensure the user can still interact. I do not have access to the UI event loop, so busy-loop solutions aren't possible in my situation.
I have a simple background thread that is communicating with another app and gathering data, storing it in a simple array for consumption. Each time this data is updated I need to use it to modify the UI (and that must run in the main thread). Ideally the background thread would emit a signal each time the data is updated, and a listener in the main thread would handle it and modify the UI. A busy loop is not an option; everything must be asynchronous/event-based.
I have the data gathering running continuously in the background using threading.Timer(...). However, since this runs in a separate thread, the UI operations need to be triggered outside of it.
def _pollLoop(self):
    if (self._isCubeControl):
        self._getNewData()
        # in a perfect world, updateUI() would be here
        self._pollThread = threading.Timer(0.1, self._pollLoop)
        self._pollThread.start()
I need a way for this poll loop to call back to the main thread so I can update the UI. I've tried direct callbacks within the poll loop, but the callbacks are run within the separate thread, causing errors.
I'm looking for a way to attach a listener to the data object so that on change updateUI() can be run IN THE MAIN THREAD.
Thanks for any help you can offer! If this is at all vague, please let me know.
Update
Based on @CAB's answer I'm now trying to implement an Observer pattern. The difficulty is that the Observable is to be run in a spawned thread while the Observer's update function must run in the main thread. I've implemented the example from Chad Lung (http://www.giantflyingsaucer.com/blog/?p=5117).
import threading

class Observable(object):
    def __init__(self):
        self.observers = []

    def register(self, observer):
        if not observer in self.observers:
            self.observers.append(observer)

    def unregister(self, observer):
        if observer in self.observers:
            self.observers.remove(observer)

    def unregister_all(self):
        if self.observers:
            del self.observers[:]

    def update_observers(self, *args, **kwargs):
        for observer in self.observers:
            observer.update(*args, **kwargs)
        thread = threading.Timer(4, self.update_observers).start()

from abc import ABCMeta, abstractmethod

class Observer(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def update(self, *args, **kwargs):
        pass

class myObserver(Observer):
    def update(self, *args, **kwargs):
        '''update is called in the source thread context'''
        print(str(threading.current_thread()))

observable = Observable()
observer = myObserver()
observable.register(observer)
observable.update_observers('Market Rally', something='Hello World')
What I get in response is:
<_MainThread(MainThread, started 140735179829248)>
<_Timer(Thread-1, started 123145306509312)>
<_Timer(Thread-2, started 123145310715904)>
So clearly the Observer is running in the spawned thread and not the main thread. Anyone have another method for me? :) Once again, I cannot have a busy loop to periodically check for value changes (I wish.. :( ). This script is running on top of a UI and I do not have access to the GUI event loop, so everything needs to be asynchronous and non-blocking.
Let's build on that example from http://www.giantflyingsaucer.com/blog/?p=5117.
from abc import ABCMeta, abstractmethod

class Observer(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def update(self, *args, **kwargs):
        pass
The onus is then on the Observer implementation to disconnect the threads. Let's say we do this using a simplistic thread. (syntax might be off, I'm cramming this in and need to catch a bus).
from observer import Observer
from threading import Thread

class myObserver(Observer):
    def update(self, *args, **kwargs):
        '''update is called in the source thread context'''
        Thread(target=self.handler, args=args, kwargs=kwargs).start()

    def handler(self, *args, **kwargs):
        '''handler runs in an independent thread context'''
        pass  # do something useful with the args
I am trying to run multiple tasks in a queue. The tasks come from user input. What I tried was creating a singleton class with a ThreadPoolExecutor property and adding tasks to it. The tasks are added fine, but it looks like only the first set of tasks that was added works. The following ones are added but not executed.
from concurrent.futures import ThreadPoolExecutor

class WebsiteTagScrapper:
    class __WebsiteTagScrapper:
        def __init__(self):
            self.executor = ThreadPoolExecutor(max_workers=5)

    instance = None

    def __new__(cls):  # __new__ always a classmethod
        if not WebsiteTagScrapper.instance:
            WebsiteTagScrapper.instance = WebsiteTagScrapper.__WebsiteTagScrapper()
        return WebsiteTagScrapper.instance
I used multiprocessing in one of my projects without using Celery, because I think it was overkill for my use case.
Maybe you could do something like this:
from multiprocessing import Process

class MyQueuProcess(Process):
    def __init__(self):
        super(MyQueuProcess, self).__init__()
        self.tasks = []

    def add_task(self, task):
        self.tasks.append(task)

    def run(self):
        for task in self.tasks:
            pass  # Do your task
You just have to create an instance in your view, add your tasks, and then call start() on it (start() will invoke run() in the child process). Also, if you need to access your database, you will need to import Django in the child process and call django.setup().
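A rough usage sketch, with a hypothetical Django view and task input:

from django.http import HttpResponse

def my_view(request):
    proc = MyQueuProcess()
    proc.add_task(request.POST.get("url"))  # whatever your task input is
    proc.start()  # run() executes in the child process
    return HttpResponse("tasks queued")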
I'm working on a GUI application developed in Python with the PySide2 UI library (a Qt wrapper for Python).
I have a heavy computation function that I want to put on another thread so it does not freeze my UI. The UI should show "Loading", and when the function is over, receive its results and update the UI with them.
I've tried a lot of different code; many examples work for others but not for me. Is it PySide2's fault? (For example, this is almost what I want to do: Updating GUI elements in MultiThreaded PyQT)
My code is :
class OtherThread(QThread):
    def __init__(self):
        QThread.__init__(self)

    def run(self):
        print 'Running......'
        self.emit(SIGNAL("over(object)"), [(1,2,3), (2,3,4)])

@Slot(object)
def printHey(obj):
    print 'Hey, I\'ve got an object ',
    print obj

thr = OtherThread()
self.connect(thr, SIGNAL("over(object)"), printHey)
thr.start()
My code is working if I use primitives such as bool or int but not with object. I see 'Running....' but never the rest.
Hope someone can enlighten me
You can't define signals dynamically on a class instance. They have to be defined as class attributes. You should be using the new-style signal and slot syntax.
from PySide2 import QtCore
from PySide2.QtCore import QThread

class OtherThread(QThread):
    over = QtCore.Signal(object)

    def run(self):
        ...
        self.over.emit([(1,2,3), (2,3,4)])

class MyApp(QtCore.QObject):
    def __init__(self):
        super(MyApp, self).__init__()
        self.thread = OtherThread(self)
        self.thread.over.connect(self.on_over)
        self.thread.start()

    @QtCore.Slot(object)
    def on_over(self, value):
        print 'Thread Value', value