Maximum number of running instances reached (1) - apscheduler - python

I'm using apscheduler to repeatedly run a function in my application. The basic source of this functionality is as follows:
class ClassName(QtGui.QWidget):
    def __init__(self):
        super(ClassName, self).__init__()
        from apscheduler.scheduler import Scheduler
        sched = Scheduler(standalone=True)
        sched.daemonic = False
        sched.add_cron_job(self.FunctionName, second='*/5')

    def FunctionName(self):
        print("Hello World!")
        # Function contents here

if __name__ == '__main__':
    import sys
    app = QtGui.QApplication(sys.argv)
    ClassName = ClassName()
    sys.exit(app.exec_())
My understanding from reading this thread is that if the called function hasn't completed when the next job is started, the "Maximum number of running instances reached" error can be raised. I understand the theory behind the solution in that thread, but am unsure how to apply it.
Should I explicitly terminate/kill the process at the end of the called function, to ensure that the original thread is removed before the next one is started?
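For reference, here is a minimal sketch of one way to avoid this error without killing anything: let the scheduler allow overlapping runs (or coalesce missed ones) through its job options. This assumes the current APScheduler 3.x API (BackgroundScheduler and add_job with max_instances/coalesce); the exact keywords may differ for the older apscheduler.scheduler.Scheduler used above, and slow_task is just an illustrative stand-in for FunctionName.
# Sketch using APScheduler 3.x; slow_task stands in for FunctionName.
import time
from apscheduler.schedulers.background import BackgroundScheduler

def slow_task():
    print("started")
    time.sleep(8)  # deliberately longer than the 5-second schedule
    print("finished")

sched = BackgroundScheduler()
# max_instances=2 lets a second run start while the first is still working;
# coalesce=True collapses a backlog of missed runs into a single run.
sched.add_job(slow_task, 'cron', second='*/5', max_instances=2, coalesce=True)
sched.start()

try:
    while True:
        time.sleep(1)
except (KeyboardInterrupt, SystemExit):
    sched.shutdown()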

Related

Python socketio with multiprocessing

I have been struggling with a pickle error that is driving me crazy. I have the following master Engine class:
import eventlet
import socketio
import multiprocessing
from multiprocessing import Queue
from multi import SIOSerever

class masterEngine:
    if __name__ == '__main__':
        serverObj = SIOSerever()
        try:
            receiveData = multiprocessing.Process(target=serverObj.run)
            receiveData.start()
            receiveProcess = multiprocessing.Process(target=serverObj.fetchFromQueue)
            receiveProcess.start()
            receiveData.join()
            receiveProcess.join()
        except Exception as error:
            print(error)
and I have another file called multi that looks like this:
import multiprocessing
from multiprocessing import Queue
import eventlet
import socketio

class SIOSerever:
    def __init__(self):
        self.cycletimeQueue = Queue()
        self.sio = socketio.Server(cors_allowed_origins='*', logger=False)
        self.app = socketio.WSGIApp(self.sio, static_files={'/': 'index.html',})
        self.ws_server = eventlet.listen(('0.0.0.0', 5000))

        @self.sio.on('production')
        def p_message(sid, message):
            self.cycletimeQueue.put(message)
            print("I logged : " + str(message))

    def run(self):
        eventlet.wsgi.server(self.ws_server, self.app)

    def fetchFromQueue(self):
        while True:
            cycle = self.cycletimeQueue.get()
            print(cycle)
As you can see, I am trying to create two processes, one for run and one for fetchFromQueue, which I want to run independently.
My run function starts the python-socketio server, to which I'm sending some data from an HTML web page (this runs perfectly without multiprocessing). I am then trying to push the received data to a Queue so that my other function can retrieve it and work with it.
I have a set of time-consuming operations that I need to carry out on the data received from the socket, which is why I'm pushing it all into a Queue.
On running the masterEngine class, I receive the following:
Can't pickle <class 'threading.Thread'>: it's not the same object as threading.Thread
I ended!
[Finished in 0.5s]
Can you please help with what I am doing wrong?
From multiprocessing programming guidelines:
Explicitly pass resources to child processes
On Unix using the fork start method, a child process can make use of a shared resource created in a parent process using a global resource. However, it is better to pass the object as an argument to the constructor for the child process.
Apart from making the code (potentially) compatible with Windows and the other start methods this also ensures that as long as the child process is still alive the object will not be garbage collected in the parent process. This might be important if some resource is freed when the object is garbage collected in the parent process.
Therefore, I slightly modified your example by removing everything unnecessary, but showing an approach where the shared queue is explicitly passed to all processes that use it:
import multiprocessing

MAX = 5

class SIOSerever:
    def __init__(self, queue):
        self.cycletimeQueue = queue

    def run(self):
        for i in range(MAX):
            self.cycletimeQueue.put(i)

    @staticmethod
    def fetchFromQueue(cycletimeQueue):
        while True:
            cycle = cycletimeQueue.get()
            print(cycle)
            if cycle >= MAX - 1:
                break

def start_server(queue):
    server = SIOSerever(queue)
    server.run()

if __name__ == '__main__':
    try:
        queue = multiprocessing.Queue()
        receiveData = multiprocessing.Process(target=start_server, args=(queue,))
        receiveData.start()
        receiveProcess = multiprocessing.Process(target=SIOSerever.fetchFromQueue, args=(queue,))
        receiveProcess.start()
        receiveData.join()
        receiveProcess.join()
    except Exception as error:
        print(error)
0
1
...

My code's window doesn't respond when it's calculating stuff [duplicate]

I have a program which interfaces with a radio I am using via a gui I wrote in PyQt. Obviously one of the main functions of the radio is to transmit data, but to do this continuously, I have to loop the writes, which causes the gui to hang. Since I have never dealt with threading, I tried to get rid of these hangs using QCoreApplication.processEvents(). The radio needs to sleep between transmissions, though, so the gui still hangs based on how long these sleeps last.
Is there a simple way to fix this using QThread? I have looked for tutorials on how to implement multithreading with PyQt, but most of them deal with setting up servers and are much more advanced than I need them to be. I honestly don't even really need my thread to update anything while it is running, I just need to start it, have it transmit in the background, and stop it.
I created a little example that shows 3 different and simple ways of dealing with threads. I hope it will help you find the right approach to your problem.
import sys
import time
from PyQt5.QtCore import (QCoreApplication, QObject, QRunnable, QThread,
                          QThreadPool, pyqtSignal)

# Subclassing QThread
# http://qt-project.org/doc/latest/qthread.html
class AThread(QThread):
    def run(self):
        count = 0
        while count < 5:
            time.sleep(1)
            print("A Increasing")
            count += 1

# Subclassing QObject and using moveToThread
# http://blog.qt.digia.com/blog/2007/07/05/qthreads-no-longer-abstract
class SomeObject(QObject):
    finished = pyqtSignal()

    def long_running(self):
        count = 0
        while count < 5:
            time.sleep(1)
            print("B Increasing")
            count += 1
        self.finished.emit()

# Using a QRunnable
# http://qt-project.org/doc/latest/qthreadpool.html
# Note that a QRunnable isn't a subclass of QObject and therefore does
# not provide signals and slots.
class Runnable(QRunnable):
    def run(self):
        count = 0
        app = QCoreApplication.instance()
        while count < 5:
            print("C Increasing")
            time.sleep(1)
            count += 1
        app.quit()

def using_q_thread():
    app = QCoreApplication([])
    thread = AThread()
    thread.finished.connect(app.exit)
    thread.start()
    sys.exit(app.exec_())

def using_move_to_thread():
    app = QCoreApplication([])
    objThread = QThread()
    obj = SomeObject()
    obj.moveToThread(objThread)
    obj.finished.connect(objThread.quit)
    objThread.started.connect(obj.long_running)
    objThread.finished.connect(app.exit)
    objThread.start()
    sys.exit(app.exec_())

def using_q_runnable():
    app = QCoreApplication([])
    runnable = Runnable()
    QThreadPool.globalInstance().start(runnable)
    sys.exit(app.exec_())

if __name__ == "__main__":
    # using_q_thread()
    # using_move_to_thread()
    using_q_runnable()
Here is this answer updated for PyQt5, Python 3.4.
Use this as a pattern to start a worker that takes no input data and returns data to the form as it becomes available.
1 - The Worker class is made smaller and put in its own file worker.py for easy memorization and independent software reuse.
2 - The main.py file is the file that defines the GUI Form class.
3 - The thread object is not subclassed.
4 - Both the thread object and the worker object belong to the Form object.
5 - Steps of the procedure are within the comments.
# worker.py
from PyQt5.QtCore import QThread, QObject, pyqtSignal, pyqtSlot
import time

class Worker(QObject):
    finished = pyqtSignal()
    intReady = pyqtSignal(int)

    @pyqtSlot()
    def procCounter(self):  # A slot takes no params
        for i in range(1, 100):
            time.sleep(1)
            self.intReady.emit(i)
        self.finished.emit()
And the main file is:
# main.py
from PyQt5.QtCore import QThread
from PyQt5.QtWidgets import QApplication, QLabel, QWidget, QGridLayout
import sys
import worker

class Form(QWidget):
    def __init__(self):
        super().__init__()
        self.label = QLabel("0")

        # 1 - create Worker and Thread inside the Form
        self.obj = worker.Worker()  # no parent!
        self.thread = QThread()  # no parent!

        # 2 - Connect Worker's Signals to Form method slots to post data.
        self.obj.intReady.connect(self.onIntReady)

        # 3 - Move the Worker object to the Thread object
        self.obj.moveToThread(self.thread)

        # 4 - Connect Worker Signals to the Thread slots
        self.obj.finished.connect(self.thread.quit)

        # 5 - Connect Thread started signal to Worker operational slot method
        self.thread.started.connect(self.obj.procCounter)

        # * - Thread finished signal will close the app if you want!
        # self.thread.finished.connect(app.exit)

        # 6 - Start the thread
        self.thread.start()

        # 7 - Start the form
        self.initUI()

    def initUI(self):
        grid = QGridLayout()
        self.setLayout(grid)
        grid.addWidget(self.label, 0, 0)
        self.move(300, 150)
        self.setWindowTitle('thread test')
        self.show()

    def onIntReady(self, i):
        self.label.setText("{}".format(i))
        # print(i)

app = QApplication(sys.argv)
form = Form()
sys.exit(app.exec_())
According to the Qt developers, subclassing QThread is incorrect (see http://blog.qt.io/blog/2010/06/17/youre-doing-it-wrong/). But that article is really hard to understand (plus the title is a bit condescending). I found a better blog post that gives a more detailed explanation about why you should use one style of threading over another: http://mayaposch.wordpress.com/2011/11/01/how-to-really-truly-use-qthreads-the-full-explanation/
Also, I would highly recommend this video from KDAB on signals and slots between threads.
In my opinion, you should probably never subclass thread with the intent to overload the run method. While that does work, you're basically circumventing how Qt wants you to work. Plus you'll miss out on things like events and proper thread safe signals and slots. Plus as you'll likely see in the above blog post, the "correct" way of threading forces you to write more testable code.
Here's a couple of examples of how to take advantage of QThreads in PyQt (I posted a separate answer below that properly uses QRunnable and incorporates signals/slots, that answer is better if you have a lot of async tasks that you need to load balance).
import sys
from PyQt4 import QtCore
from PyQt4 import QtGui
from PyQt4.QtCore import Qt

# very testable class (hint: you can use mock.Mock for the signals)
class Worker(QtCore.QObject):
    finished = QtCore.pyqtSignal()
    dataReady = QtCore.pyqtSignal(list, dict)

    @QtCore.pyqtSlot()
    def processA(self):
        print "Worker.processA()"
        self.finished.emit()

    @QtCore.pyqtSlot(str, list, list)
    def processB(self, foo, bar=None, baz=None):
        print "Worker.processB()"
        for thing in bar:
            # lots of processing...
            self.dataReady.emit(['dummy', 'data'], {'dummy': ['data']})
        self.finished.emit()

class Thread(QtCore.QThread):
    """Need for PyQt4 <= 4.6 only"""
    def __init__(self, parent=None):
        QtCore.QThread.__init__(self, parent)

    # this class is solely needed for these two methods, there
    # appears to be a bug in PyQt 4.6 that requires you to
    # explicitly call run and start from the subclass in order
    # to get the thread to actually start an event loop
    def start(self):
        QtCore.QThread.start(self)

    def run(self):
        QtCore.QThread.run(self)

app = QtGui.QApplication(sys.argv)

thread = Thread()  # no parent!
obj = Worker()  # no parent!
obj.moveToThread(thread)

# if you want the thread to stop after the worker is done
# you can always call thread.start() again later
obj.finished.connect(thread.quit)

# one way to do it is to start processing as soon as the thread starts
# this is okay in some cases... but makes it harder to send data to
# the worker object from the main gui thread. As you can see I'm calling
# processA() which takes no arguments
thread.started.connect(obj.processA)
thread.start()

# another way to do it, which is a bit fancier, allows you to talk back and
# forth with the object in a thread safe way by communicating through signals
# and slots (now that the thread is running I can start calling methods on
# the worker object)
QtCore.QMetaObject.invokeMethod(obj, 'processB', Qt.QueuedConnection,
                                QtCore.Q_ARG(str, "Hello World!"),
                                QtCore.Q_ARG(list, ["args", 0, 1]),
                                QtCore.Q_ARG(list, []))

# that looks a bit scary, but its a totally ok thing to do in Qt,
# we're simply using the system that Signals and Slots are built on top of,
# the QMetaObject, to make it act like we safely emitted a signal for
# the worker thread to pick up when its event loop resumes (so if its doing
# a bunch of work you can call this method 10 times and it will just queue
# up the calls. Note: PyQt > 4.6 will not allow you to pass in a None
# instead of an empty list, it has stricter type checking
app.exec_()

# Without this you may get weird QThread messages in the shell on exit
app.deleteLater()
Very nice example from Matt. I fixed the typo, and since PyQt 4.8 is common now I also removed the dummy Thread class and added an example for the dataReady signal.
# -*- coding: utf-8 -*-
import sys
from PyQt4 import QtCore, QtGui
from PyQt4.QtCore import Qt

# very testable class (hint: you can use mock.Mock for the signals)
class Worker(QtCore.QObject):
    finished = QtCore.pyqtSignal()
    dataReady = QtCore.pyqtSignal(list, dict)

    @QtCore.pyqtSlot()
    def processA(self):
        print "Worker.processA()"
        self.finished.emit()

    @QtCore.pyqtSlot(str, list, list)
    def processB(self, foo, bar=None, baz=None):
        print "Worker.processB()"
        for thing in bar:
            # lots of processing...
            self.dataReady.emit(['dummy', 'data'], {'dummy': ['data']})
        self.finished.emit()

def onDataReady(aList, aDict):
    print 'onDataReady'
    print repr(aList)
    print repr(aDict)

app = QtGui.QApplication(sys.argv)

thread = QtCore.QThread()  # no parent!
obj = Worker()  # no parent!
obj.dataReady.connect(onDataReady)
obj.moveToThread(thread)

# if you want the thread to stop after the worker is done
# you can always call thread.start() again later
obj.finished.connect(thread.quit)

# one way to do it is to start processing as soon as the thread starts
# this is okay in some cases... but makes it harder to send data to
# the worker object from the main gui thread. As you can see I'm calling
# processA() which takes no arguments
thread.started.connect(obj.processA)
thread.finished.connect(app.exit)
thread.start()

# another way to do it, which is a bit fancier, allows you to talk back and
# forth with the object in a thread safe way by communicating through signals
# and slots (now that the thread is running I can start calling methods on
# the worker object)
QtCore.QMetaObject.invokeMethod(obj, 'processB', Qt.QueuedConnection,
                                QtCore.Q_ARG(str, "Hello World!"),
                                QtCore.Q_ARG(list, ["args", 0, 1]),
                                QtCore.Q_ARG(list, []))

# that looks a bit scary, but its a totally ok thing to do in Qt,
# we're simply using the system that Signals and Slots are built on top of,
# the QMetaObject, to make it act like we safely emitted a signal for
# the worker thread to pick up when its event loop resumes (so if its doing
# a bunch of work you can call this method 10 times and it will just queue
# up the calls. Note: PyQt > 4.6 will not allow you to pass in a None
# instead of an empty list, it has stricter type checking
app.exec_()
In PyQt there are a lot of options for getting asynchronous behavior. For things that need event processing (i.e. QtNetwork, etc.) you should use the QThread example I provided in my other answer on this thread. But for the vast majority of your threading needs, I think this solution is far superior to the other methods.
The advantage of this is that the QThreadPool schedules your QRunnable instances as tasks. This is similar to the task pattern used in Intel's TBB. It's not quite as elegant as I'd like, but it does pull off excellent asynchronous behavior.
This allows you to utilize most of the threading power of Qt in Python via QRunnable and still take advantage of signals and slots. I use this same code in several applications, some that make hundreds of asynchronous REST calls, some that open files or list directories, and the best part is that, using this method, Qt balances the system resources for me.
import time
from PyQt4 import QtCore
from PyQt4 import QtGui
from PyQt4.QtCore import Qt

def async(method, args, uid, readycb, errorcb=None):
    """
    Asynchronously runs a task

    :param func method: the method to run in a thread
    :param object uid: a unique identifier for this task (used for verification)
    :param slot updatecb: the callback when data is receieved cb(uid, data)
    :param slot errorcb: the callback when there is an error cb(uid, errmsg)

    The uid option is useful when the calling code makes multiple async calls
    and the callbacks need some context about what was sent to the async method.
    For example, if you use this method to thread a long running database call
    and the user decides they want to cancel it and start a different one, the
    first one may complete before you have a chance to cancel the task. In that
    case, the "readycb" will be called with the cancelled task's data. The uid
    can be used to differentiate those two calls (ie. using the sql query).

    :returns: Request instance
    """
    request = Request(method, args, uid, readycb, errorcb)
    QtCore.QThreadPool.globalInstance().start(request)
    return request

class Request(QtCore.QRunnable):
    """
    A Qt object that represents an asynchronous task

    :param func method: the method to call
    :param list args: list of arguments to pass to method
    :param object uid: a unique identifier (used for verification)
    :param slot readycb: the callback used when data is receieved
    :param slot errorcb: the callback used when there is an error

    The uid param is sent to your error and update callbacks as the
    first argument. It's there to verify the data you're returning

    After created it should be used by invoking:

    .. code-block:: python

        task = Request(...)
        QtCore.QThreadPool.globalInstance().start(task)
    """
    INSTANCES = []
    FINISHED = []

    def __init__(self, method, args, uid, readycb, errorcb=None):
        super(Request, self).__init__()
        self.setAutoDelete(True)
        self.cancelled = False

        self.method = method
        self.args = args
        self.uid = uid
        self.dataReady = readycb
        self.dataError = errorcb

        Request.INSTANCES.append(self)

        # release all of the finished tasks
        Request.FINISHED = []

    def run(self):
        """
        Method automatically called by Qt when the runnable is ready to run.
        This will run in a separate thread.
        """
        # this allows us to "cancel" queued tasks if needed, should be done
        # on shutdown to prevent the app from hanging
        if self.cancelled:
            self.cleanup()
            return

        # runs in a separate thread, for proper async signal/slot behavior
        # the object that emits the signals must be created in this thread.
        # Its not possible to run grabber.moveToThread(QThread.currentThread())
        # so to get this QObject to properly exhibit asynchronous
        # signal and slot behavior it needs to live in the thread that
        # we're running in, creating the object from within this thread
        # is an easy way to do that.
        grabber = Requester()
        grabber.Loaded.connect(self.dataReady, Qt.QueuedConnection)
        if self.dataError is not None:
            grabber.Error.connect(self.dataError, Qt.QueuedConnection)

        try:
            result = self.method(*self.args)
            if self.cancelled:
                # cleanup happens in 'finally' statement
                return
            grabber.Loaded.emit(self.uid, result)
        except Exception as error:
            if self.cancelled:
                # cleanup happens in 'finally' statement
                return
            grabber.Error.emit(self.uid, unicode(error))
        finally:
            # this will run even if one of the above return statements
            # is executed inside of the try/except statement see:
            # https://docs.python.org/2.7/tutorial/errors.html#defining-clean-up-actions
            self.cleanup(grabber)

    def cleanup(self, grabber=None):
        # remove references to any object or method for proper ref counting
        self.method = None
        self.args = None
        self.uid = None
        self.dataReady = None
        self.dataError = None

        if grabber is not None:
            grabber.deleteLater()

        # make sure this python obj gets cleaned up
        self.remove()

    def remove(self):
        try:
            Request.INSTANCES.remove(self)

            # when the next request is created, it will clean this one up
            # this will help us avoid this object being cleaned up
            # when it's still being used
            Request.FINISHED.append(self)
        except ValueError:
            # there might be a race condition on shutdown, when shutdown()
            # is called while the thread is still running and the instance
            # has already been removed from the list
            return

    @staticmethod
    def shutdown():
        for inst in Request.INSTANCES:
            inst.cancelled = True
        Request.INSTANCES = []
        Request.FINISHED = []

class Requester(QtCore.QObject):
    """
    A simple object designed to be used in a separate thread to allow
    for asynchronous data fetching
    """

    #
    # Signals
    #

    Error = QtCore.pyqtSignal(object, unicode)
    """
    Emitted if the fetch fails for any reason

    :param unicode uid: an id to identify this request
    :param unicode error: the error message
    """

    Loaded = QtCore.pyqtSignal(object, object)
    """
    Emitted whenever data comes back successfully

    :param unicode uid: an id to identify this request
    :param list data: the json list returned from the GET
    """

    NetworkConnectionError = QtCore.pyqtSignal(unicode)
    """
    Emitted when the task fails due to a network connection error

    :param unicode message: network connection error message
    """

    def __init__(self, parent=None):
        super(Requester, self).__init__(parent)

class ExampleObject(QtCore.QObject):
    def __init__(self, parent=None):
        super(ExampleObject, self).__init__(parent)
        self.uid = 0
        self.request = None

    def ready_callback(self, uid, result):
        if uid != self.uid:
            return
        print "Data ready from %s: %s" % (uid, result)

    def error_callback(self, uid, error):
        if uid != self.uid:
            return
        print "Data error from %s: %s" % (uid, error)

    def fetch(self):
        if self.request is not None:
            # cancel any pending requests
            self.request.cancelled = True
            self.request = None

        self.uid += 1
        self.request = async(slow_method, ["arg1", "arg2"], self.uid,
                             self.ready_callback,
                             self.error_callback)

def slow_method(arg1, arg2):
    print "Starting slow method"
    time.sleep(1)
    return arg1 + arg2

if __name__ == "__main__":
    import sys
    app = QtGui.QApplication(sys.argv)

    obj = ExampleObject()

    dialog = QtGui.QDialog()
    layout = QtGui.QVBoxLayout(dialog)
    button = QtGui.QPushButton("Generate", dialog)
    progress = QtGui.QProgressBar(dialog)
    progress.setRange(0, 0)
    layout.addWidget(button)
    layout.addWidget(progress)
    button.clicked.connect(obj.fetch)
    dialog.show()

    app.exec_()
    app.deleteLater()  # avoids some QThread messages in the shell on exit
    # cancel all running tasks avoid QThread/QTimer error messages
    # on exit
    Request.shutdown()
When exiting the application you'll want to make sure you cancel all of the tasks, or the application will hang until every scheduled task has completed.
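As a small variation (my own sketch, not part of the original example), the same cleanup can be wired to the application's aboutToQuit signal so it runs automatically instead of being called after exec_():
# Hypothetical variation: cancel queued/running Request tasks automatically on exit.
# Assumes the Request class and PyQt4 imports from the example above.
app = QtGui.QApplication(sys.argv)
app.aboutToQuit.connect(Request.shutdown)  # runs just before the event loop exits

# ... build and show your widgets here ...

sys.exit(app.exec_())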
Based on the worker-object methods mentioned in other answers, I decided to see if I could expand the solution to invoke more threads - in this case, the optimal number the machine can run - and spin up multiple workers with indeterminate completion times.
To do this I still need to subclass QThread - but only to assign a thread number and to 'reimplement' the signals 'finished' and 'started' to include their thread number.
I've focused quite a bit on the signals between the main GUI, the threads, and the workers.
Similarly, other answers have been at pains to point out not parenting the QThread, but I don't think this is a real concern. However, my code is also careful to destroy the QThread objects.
However, I wasn't able to parent the worker objects, so it seems desirable to send them the deleteLater() signal, either when the thread function is finished or the GUI is destroyed. I've had my own code hang for not doing this.
Another enhancement I felt was necessary was to reimplement the closeEvent of the GUI (QWidget) so that the threads would be instructed to quit, and the GUI would then wait until all the threads had finished. When I played with some of the other answers to this question, I got QThread destroyed errors.
Perhaps it will be useful to others. I certainly found it a useful exercise. Perhaps others will know a better way for a thread to announce its identity.
#!/usr/bin/env python3
#coding:utf-8
# Author: --<>
# Purpose: To demonstrate creation of multiple threads and identify the receipt of thread results
# Created: 19/12/15

import sys
from PyQt4.QtCore import QThread, pyqtSlot, pyqtSignal
from PyQt4.QtGui import QApplication, QLabel, QWidget, QGridLayout

import worker

class Thread(QThread):
    # make new signals to be able to return an id for the thread
    startedx = pyqtSignal(int)
    finishedx = pyqtSignal(int)

    def __init__(self, i, parent=None):
        super().__init__(parent)
        self.idd = i
        self.started.connect(self.starttt)
        self.finished.connect(self.finisheddd)

    @pyqtSlot()
    def starttt(self):
        print('started signal from thread emitted')
        self.startedx.emit(self.idd)

    @pyqtSlot()
    def finisheddd(self):
        print('finished signal from thread emitted')
        self.finishedx.emit(self.idd)

class Form(QWidget):
    def __init__(self):
        super().__init__()
        self.initUI()
        self.worker = {}
        self.threadx = {}
        self.i = 0
        i = 0

        # Establish the maximum number of threads the machine can optimally handle
        # Generally relates to the number of processors
        self.threadtest = QThread(self)
        self.idealthreadcount = self.threadtest.idealThreadCount()
        print("This machine can handle {} threads optimally".format(self.idealthreadcount))

        while i < self.idealthreadcount:
            self.setupThread(i)
            i += 1

        i = 0
        while i < self.idealthreadcount:
            self.startThread(i)
            i += 1

        print("Main Gui running in thread {}.".format(self.thread()))

    def setupThread(self, i):
        self.worker[i] = worker.Worker(i)  # no parent!
        # print("Worker object running in thread {} prior to movetothread".format(self.worker[i].thread()))
        self.threadx[i] = Thread(i, parent=self)  # if parent isn't specified then need to be careful to destroy thread
        self.threadx[i].setObjectName("python thread{}"+str(i))
        # print("Thread object running in thread {} prior to movetothread".format(self.threadx[i].thread()))
        self.threadx[i].startedx.connect(self.threadStarted)
        self.threadx[i].finishedx.connect(self.threadFinished)

        self.worker[i].finished.connect(self.workerFinished)
        self.worker[i].intReady.connect(self.workerResultReady)

        # The next line is optional, you may want to start the threads again without having to create all the code again.
        self.worker[i].finished.connect(self.threadx[i].quit)

        self.threadx[i].started.connect(self.worker[i].procCounter)

        self.destroyed.connect(self.threadx[i].deleteLater)
        self.destroyed.connect(self.worker[i].deleteLater)

        # This is the key code that actually gets the worker code onto another processor or thread.
        self.worker[i].moveToThread(self.threadx[i])

    def startThread(self, i):
        self.threadx[i].start()

    @pyqtSlot(int)
    def threadStarted(self, i):
        print('Thread {} started'.format(i))
        print("Thread priority is {}".format(self.threadx[i].priority()))

    @pyqtSlot(int)
    def threadFinished(self, i):
        print('Thread {} finished'.format(i))

    @pyqtSlot(int)
    def threadTerminated(self, i):
        print("Thread {} terminated".format(i))

    @pyqtSlot(int, int)
    def workerResultReady(self, j, i):
        print('Worker {} result returned'.format(i))
        if i == 0:
            self.label1.setText("{}".format(j))
        if i == 1:
            self.label2.setText("{}".format(j))
        if i == 2:
            self.label3.setText("{}".format(j))
        if i == 3:
            self.label4.setText("{}".format(j))
        # print('Thread {} has started'.format(self.threadx[i].currentThreadId()))

    @pyqtSlot(int)
    def workerFinished(self, i):
        print('Worker {} finished'.format(i))

    def initUI(self):
        self.label1 = QLabel("0")
        self.label2 = QLabel("0")
        self.label3 = QLabel("0")
        self.label4 = QLabel("0")

        grid = QGridLayout(self)
        self.setLayout(grid)
        grid.addWidget(self.label1, 0, 0)
        grid.addWidget(self.label2, 0, 1)
        grid.addWidget(self.label3, 0, 2)
        grid.addWidget(self.label4, 0, 3)  # Layout parents the self.labels

        self.move(300, 150)
        self.setGeometry(0, 0, 300, 300)
        # self.size(300,300)
        self.setWindowTitle('thread test')
        self.show()

    def closeEvent(self, event):
        print('Closing')
        # this tells the threads to stop running
        i = 0
        while i < self.idealthreadcount:
            self.threadx[i].quit()
            i += 1

        # this ensures the window cannot be closed until the threads have finished.
        i = 0
        while i < self.idealthreadcount:
            self.threadx[i].wait()
            i += 1

        event.accept()

if __name__ == '__main__':
    app = QApplication(sys.argv)
    form = Form()
    sys.exit(app.exec_())
And the worker code is below:
#!/usr/bin/env python3
#coding:utf-8
# Author: --<>
# Purpose: Stack Overflow
# Created: 19/12/15

import sys
import unittest

from PyQt4.QtCore import QThread, QObject, pyqtSignal, pyqtSlot
import time
import random

class Worker(QObject):
    finished = pyqtSignal(int)
    intReady = pyqtSignal(int, int)

    def __init__(self, i=0):
        '''__init__ is called while the worker is still in the Gui thread. Do not put slow or CPU intensive code in the __init__ method'''
        super().__init__()
        self.idd = i

    @pyqtSlot()
    def procCounter(self):  # This slot takes no params
        for j in range(1, 10):
            random_time = random.weibullvariate(1, 2)
            time.sleep(random_time)
            self.intReady.emit(j, self.idd)
            print('Worker {0} in thread {1}'.format(self.idd, self.thread().idd))

        self.finished.emit(self.idd)

if __name__ == '__main__':
    unittest.main()
PySide2 Solution:
Unlike in PyQt5, in PySide2 the QThread.started signal is received/handled on the original thread, not the worker thread! Luckily it still receives all other signals on the worker thread.
In order to match PyQt5's behavior, you have to create the started signal yourself.
Here is an easy solution:
from PySide2.QtCore import QThread, Signal

# Use this class instead of QThread
class QThread2(QThread):
    # Use this signal instead of "started"
    started2 = Signal()

    def __init__(self):
        QThread.__init__(self)
        self.started.connect(self.onStarted)

    def onStarted(self):
        self.started2.emit()
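A minimal usage sketch (my own illustration; Worker here is a hypothetical QObject, and a running QCoreApplication/QApplication is assumed):
# Sketch: use started2 exactly where you would have used started in PyQt5.
from PySide2.QtCore import QObject, Slot

class Worker(QObject):
    @Slot()
    def do_work(self):
        print("running in the worker thread")

thread = QThread2()
worker = Worker()
worker.moveToThread(thread)
thread.started2.connect(worker.do_work)  # delivered on the worker thread
thread.start()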

Why does input() cause "QCoreApplication::exec: The event loop is already running"?

I have run into this QCoreApplication problem where invoking input() after a QObject finishes executing inside a QThread causes an infinite loop printing to the console "QCoreApplication::exec: The event loop is already running".
In the code I create a generic worker as a QObject, move it into a QThread (the sanctioned way to use QThread, instead of subclassing it) and then execute another QObject's (Master class) function inside the generic worker. Everything works fine as long as I don't call input() after the Master has been executed. Note that the problem occurs also if I execute a function directly in the worker (not a Master instance's function).
Here is the sample code to reproduce the problem:
import sys
from PyQt4.QtCore import QCoreApplication, QObject, QThread, pyqtSignal, pyqtSlot

class Worker(QObject):
    """
    Generic worker.
    """

    start = pyqtSignal(str)
    finished = pyqtSignal()

    def __init__(self, function):
        QObject.__init__(self)
        self._function = function
        self.start.connect(self.run)

    def run(self):
        self._function()
        self.finished.emit()

class Master(QObject):
    """
    An object that will use the worker class.
    """

    finished = pyqtSignal()

    def __init__(self):
        QObject.__init__(self)

    @pyqtSlot()
    def do(self):
        print("Do what?")
        self.finished.emit()

def done():
    # FIXME This will cause an infinite loop printing to the console:
    # "QCoreApplication::exec: The event loop is already running"
    input("Enter your answer: ")

def main():
    app = QCoreApplication(sys.argv)

    master = Master()
    worker = Worker(master.do)
    master.finished.connect(done)

    thread = QThread()
    thread.started.connect(worker.run)
    worker.moveToThread(thread)

    # Terminating thread gracefully, or so.
    worker.finished.connect(thread.quit)
    worker.finished.connect(worker.deleteLater)
    thread.finished.connect(thread.deleteLater)

    thread.start()

    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
There is no real problem with input in your example. After pressing enter in done(), control will return to the event-loop and then wait for further user interaction - which is the normal and expected behaviour.
You don't make it clear what you expect to happen after that. But if you want the program to quit, just do this:
def done():
    input("Enter your answer: ")
    QCoreApplication.quit()
The Qt warning message is harmless, but it can be removed like this:
def main():
    from PyQt4.QtCore import pyqtRemoveInputHook
    pyqtRemoveInputHook()
    app = QCoreApplication(sys.argv)
    ...
The only real problem in your example is the threading implementation. If you add the line print(QThread.currentThread()) to Worker.run(), Master.do() and main(), you will see that all three are executed in the main thread. This is because you connected the thread.started signal before moving the worker to the other thread. The best (i.e. most easily maintainable) way to fix this issue is to always use the @pyqtSlot decorator on any slots that are connected across threads - because then it won't matter when the signal connections are made. (See this answer for a more complete explanation of this issue).
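To illustrate that last point, here is a sketch (my own, applied to the Worker class from the question) of where the decorator goes; per the explanation above, with @pyqtSlot in place it no longer matters when the cross-thread connections are made:
# Sketch: decorate the slot that is invoked across threads.
from PyQt4.QtCore import QObject, pyqtSignal, pyqtSlot

class Worker(QObject):
    finished = pyqtSignal()

    def __init__(self, function):
        QObject.__init__(self)
        self._function = function

    @pyqtSlot()
    def run(self):
        # With the decorator, this is delivered in the thread the Worker
        # was moved to, regardless of when thread.started was connected.
        self._function()
        self.finished.emit()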

Using threads in PyGObject with GTK3 in python3

So I am trying to use threads to implement a blocking operation in a Python3 based application.
#!/usr/bin/env python3
import gi, os, threading, Skype4Py
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GLib, GObject

skype = Skype4Py.Skype()

def ConnectSkype():
    skype.Attach()

class Contacts_Listbox_Row(Gtk.ListBoxRow):
    def __init__(self, name):
        # super is not a good idea, needs replacement.
        super(Gtk.ListBoxRow, self).__init__()
        self.names = name
        self.add(Gtk.Label(label=name))

class MainInterfaceWindow(Gtk.Window):
    """The Main User UI"""
    def __init__(self):
        Gtk.Window.__init__(self, title="Python-GTK-Frontend")

        # Set up Grid object
        main_grid = Gtk.Grid()
        self.add(main_grid)

        # Create a listbox which will contain selectable contacts
        contacts_listbox = Gtk.ListBox()
        for handle, name in self.GetContactTuples():
            GLib.idle_add(contacts_listbox.add, Contacts_Listbox_Row(name))
        GLib.idle_add(main_grid.add, contacts_listbox)

        # Test label for debug
        label = Gtk.Label()
        label.set_text("Test")
        GLib.idle_add(main_grid.attach_next_to, label, contacts_listbox, Gtk.PositionType.TOP, 2, 1)

    def GetContactTuples(self):
        """
        Returns a list of tuples in the form: (username, display name).
        Return -1 if failure.
        """
        print([(user.Handle, user.FullName) for user in skype.Friends])  # debug
        return [(user.Handle, user.FullName) for user in skype.Friends]

if __name__ == '__main__':
    threads = []
    thread = threading.Thread(target=ConnectSkype)  # potentially blocking operation
    thread.start()
    threads.append(thread)

    main_window = MainInterfaceWindow()
    main_window.connect("delete-event", Gtk.main_quit)
    main_window.show_all()
    print('Calling Gtk.main')
    Gtk.main()
The basic idea is that this simple program should fetch a list of contacts from the Skype API and build a list of tuples. The GetContactTuples function succeeds in its design; the print call I placed verifies that. However, the program hangs indefinitely and never renders an interface. Sometimes it will yield random errors involving threads and/or resource availability. One such error is:
(example.py:31248): Gdk-WARNING **: example.py: Fatal IO error 11 (Resource temporarily unavailable) on X server :1.
I know it is related to the use of threads, but based on the documentation here, it seems like just adding GLib.idle_add calls before interface updates should be sufficient. So the questions are, why does this not work, and how could I correct the above sample?
UPDATE:
If GLib.idle_add is prepended to every line that interacts with GTK wherever possible, I get a different error.
[xcb] Unknown request in queue while dequeuing
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
python: xcb_io.c:179: dequeue_pending_request: Assertion '!xcb_xlib_unknown_req_in_deq' failed.
Aborted (core dumped)
Depending on your library version (this was no longer necessary in GObject 3.10.2), you might actually need to explicitly initialize threading using GObject.threads_init(), as below:
if __name__ == '__main__':
    threads = []
    thread = threading.Thread(target=ConnectSkype)  # potentially blocking operation
    thread.start()
    threads.append(thread)

    main_window = MainInterfaceWindow()
    main_window.connect("delete-event", Gtk.main_quit)
    GObject.threads_init()
    main_window.show_all()
    print('Calling Gtk.main')
    Gtk.main()
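For what it's worth, here is a minimal sketch (my own, independent of Skype4Py) of the pattern PyGObject generally recommends: build all widgets in the main thread, and have the worker thread push UI updates back through GLib.idle_add.
# Sketch of the thread + GLib.idle_add pattern; names are illustrative.
import threading
import time

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GLib

def slow_fetch(label):
    time.sleep(3)  # stands in for a blocking call such as skype.Attach()
    # Schedule the widget update to run in the GTK main loop, not in this thread.
    GLib.idle_add(label.set_text, "Data loaded")

win = Gtk.Window(title="idle_add demo")
label = Gtk.Label(label="Loading...")
win.add(label)
win.connect("delete-event", Gtk.main_quit)
win.show_all()

threading.Thread(target=slow_fetch, args=(label,), daemon=True).start()
Gtk.main()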

Why does a PyInstaller-packed program create a guardian process when creating a new process on Windows?

I am trying to write an alarm clock program in Python using the multiprocessing module on Windows 7.
It all runs fine in the interpreter. But when packed into one file by PyInstaller, every time the code creates a process there turn out to be two processes, one the parent and the other its child. When the code kills the parent process, the child becomes an orphan process.
The code:
from multiprocessing import Process, freeze_support
import os  # needed for os.getpid() below
import time
import winsound

def startout(seconds, name):
    freeze_support()
    print name+':pid '+str(os.getpid())+' is created'
    startTime = time.time()
    while (time.time()-startTime) < seconds:
        time.sleep(1)
    winsound.PlaySound('SystemQuestion', winsound.SND_ALIAS)
    print name+' end'

class alarmCenter:
    def __init__(self):
        self.alarmList = {'alarm1': None, 'alarm2': None, 'alarm3': None}

    def newAlarm(self, seconds, name):
        if self.alarmList[name] != None:
            if self.alarmList[name].is_alive():
                return False
        ala = Process(target=startout, args=(seconds, name))
        ala.daemon = True  # note: originally spelled "deamon", which silently has no effect
        ala.start()
        self.alarmList[name] = ala
        return True

    def stopAlarm(self, name):
        try:
            self.alarmList[name].terminate()
            self.alarmList[name].join()
            self.alarmList[name] = None
        except Exception:
            pass

    def terminateAll(self):
        for each in self.alarmList.keys():
            if self.alarmList[each] != None:
                self.alarmList[each].terminate()

if __name__ == '__main__':
    freeze_support()
    #....
Note that multiprocessing.freeze_support() is already there.
Could anyone please show me how to kill the child process or fix this bug?
