PyQt & unittest - Testing signals and slots

I have a PyQt application that I'm writing unit tests for, and it relies heavily on signals and slots. To test it properly, I have to check that the correct signals are sent.
What is the best way to do this? I see that the Qt library has a QSignalSpy, but I can't find any reference to this in PyQt. The only option I can think of is to mock emit, e.g.
import testedmodule

def myemit(signal):
    ...

testedmodule.QObject.emit = myemit
but I'm hoping there is a better way.
Edit:
My module is run as a thread, and in that case overriding emit on an instance no longer worked after starting the thread, so I updated the code above to reflect this.

You can try connecting a slot to your signal, prepare your test, then call qApp.processEvents() to let the signal propagate. But I don't think it's 100% reliable.
It's a pity that QSignalSpy is not part of PyQt indeed.

This is a more elaborate version of what I suggested myself; not necessarily the best solution for unittest, but I think it will be of interest to others who come across this:
Posted by Carlos Scheidegger on the pyqt mailing list (http://thread.gmane.org/gmane.comp.python.pyqt-pykde/9242/focus=9245)
from PyQt4 import QtCore  # or wherever your QObject comes from

_oldConnect = QtCore.QObject.connect
_oldDisconnect = QtCore.QObject.disconnect
_oldEmit = QtCore.QObject.emit

def _wrapConnect(callableObject):
    """Returns a wrapped call to the old version of QtCore.QObject.connect"""
    @staticmethod
    def call(*args):
        callableObject(*args)
        _oldConnect(*args)
    return call

def _wrapDisconnect(callableObject):
    """Returns a wrapped call to the old version of QtCore.QObject.disconnect"""
    @staticmethod
    def call(*args):
        callableObject(*args)
        _oldDisconnect(*args)
    return call

def enableSignalDebugging(**kwargs):
    """Call this to enable Qt Signal debugging. This will trap all
    connect and disconnect calls."""
    f = lambda *args: None
    connectCall = kwargs.get('connectCall', f)
    disconnectCall = kwargs.get('disconnectCall', f)
    emitCall = kwargs.get('emitCall', f)

    def printIt(msg):
        def call(*args):
            print msg, args
        return call

    QtCore.QObject.connect = _wrapConnect(connectCall)
    QtCore.QObject.disconnect = _wrapDisconnect(disconnectCall)

    def new_emit(self, *args):
        emitCall(self, *args)
        _oldEmit(self, *args)

    QtCore.QObject.emit = new_emit
Just call enableSignalDebugging(emitCall=foo) and spy on your signals until you're sick to your stomach. :)

Note: QSignalSpy is available as QtTest.QSignalSpy in PyQt5.
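For tests that can't use Qt at all, the idea behind QSignalSpy is small enough to sketch in plain Python: record every emission so the test can assert on it afterwards. Signal and SignalSpy below are illustrative stand-ins, not Qt classes:

```python
# Minimal, Qt-free sketch of the "signal spy" idea: the spy connects
# itself as a slot and records each emission's arguments.

class Signal:
    """Toy stand-in for a Qt signal."""
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:
            slot(*args)

class SignalSpy:
    """Records every emission of the signal it is connected to."""
    def __init__(self, signal):
        self.calls = []
        signal.connect(self._record)

    def _record(self, *args):
        self.calls.append(args)

# Usage: assert that the emission was observed with the right arguments.
sig = Signal()
spy = SignalSpy(sig)
sig.emit(42, "done")
assert spy.calls == [(42, "done")]
```

A real QtTest.QSignalSpy works the same way conceptually: it connects to the signal under test and exposes the recorded argument lists for assertions.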

Related

Add/Change channel for handler at runtime

In circuits 3.1.0, is there a way to set at runtime the channel for a handler?
A useful alternative would be to add a handler at runtime and specify the channel.
I've checked the Manager.addHandler implementation but couldn't make it work. I tried:
self._my_method.__func__.channel = _my_method_channel
self._my_method.__func__.names = ["event name"]
self.addHandler(self._my_method)
Yes, there is; however, it's not really a publicly exposed API.
Example (creating event handlers at runtime):
from circuits import Manager, handler

@handler("foo")
def on_foo(self):
    return "Hello World!"

def test_addHandler():
    m = Manager()
    m.start()
    m.addHandler(on_foo)
This is taken from tests.core.test_dynamic_handlers
NB: Every BaseComponent/Component subclass is also a subclass of Manager and has the .addHandler() and .removeHandler() methods. You can also apply the @handler() decorator dynamically, like this:
def on_foo(...):
    ...

self.addHandler(handler("foo")(on_foo))
You can also see a good example of this in the library itself with circuits.io.process where we dynamically create event handlers for stdin, stdout and stderr.
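The trick in self.addHandler(handler("foo")(on_foo)) is simply that a decorator is an ordinary function, so it can be applied at runtime to attach routing metadata before registering. A toy dispatcher makes the pattern visible; Dispatcher here is a made-up stand-in, not circuits, though the channel/names attributes mirror the ones the question pokes at:

```python
# Sketch of the decorator-at-runtime pattern: handler() attaches
# metadata attributes to a plain function, and the dispatcher routes
# events by looking those attributes up.

def handler(*names, **kwargs):
    def wrapper(func):
        func.names = names                    # event names this handler serves
        func.channel = kwargs.get("channel")  # optional channel
        return func
    return wrapper

class Dispatcher:
    """Toy event dispatcher, not the circuits Manager."""
    def __init__(self):
        self._handlers = {}

    def addHandler(self, func):
        for name in func.names:
            self._handlers.setdefault(name, []).append(func)

    def fire(self, name, *args):
        return [f(*args) for f in self._handlers.get(name, [])]

def on_foo():
    return "Hello World!"

d = Dispatcher()
d.addHandler(handler("foo")(on_foo))   # decorator applied at runtime
print(d.fire("foo"))                   # -> ['Hello World!']
```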

How to spawn threads in pyobjc

I am learning how to use pyobjc for some basic prototyping. Right now I have a main UI set up and a Python script that runs the main application. The only issue is that the script runs on the main thread, blocking the UI.
This is the sample snippet that I attempted in Python using the threading module:
def someFunc(self):
    i = 0
    while i < 20:
        NSLog(u"Hello I am in someFunc")
        i = i + 1

@objc.IBAction
def buttonPress(self, sender):
    thread = threading.Thread(target=self.threadedFunc)
    thread.start()

def threadedFunc(self):
    NSLog(u"Entered threadedFunc")
    self.t = NSTimer.scheduledTimerWithTimeInterval_target_selector_userInfo_repeats_(
        1/150., self, self.someFunc, None, True)
    NSLog(u"Kicked off Runloop")
    NSRunLoop.currentRunLoop().addTimer_forMode_(self.t, NSDefaultRunLoopMode)
When clicking the button, the NSLogs in threadedFunc print out to the console, but it never enters someFunc.
So I decided to use NSThread to kick off a thread. On Apple's documentation the Objective-C call looks like this:
+ (void)detachNewThreadSelector:(SEL)aSelector
                       toTarget:(id)aTarget
                     withObject:(id)anArgument
So I translated that according to what I interpreted as the PyObjC rules for calling an Objective-C function:
detachNewThreadSelector_aSelector_aTarget_anArgument_(self.threadedFunc, self, 1)
So in context the IBAction function looks like this:
@objc.IBAction
def buttonPress(self, sender):
    detachNewThreadSelector_aSelector_aTarget_anArgument_(self.threadedFunc, self, 1)
But when the button is pressed, I get this message: global name 'detachNewThreadSelector_aSelector_aTarget_anArgument_' is not defined.
I've also tried similar attempts with Grand Central Dispatch, but the same kind of message kept popping up: global name 'some_grand_central_function' is not defined.
Clearly I am not understanding the nuances of Python threads or the PyObjC calling conventions; I was wondering if someone could shed some light on how to proceed.
So I got the result that I wanted following the structure below. Like I stated in my response to the comments: on a background thread, NSThread will not allow you to perform certain tasks (i.e. updating certain UI elements, printing, etc.). So I used performSelectorOnMainThread_withObject_waitUntilDone_ for things that I needed to perform in between thread operations. The operations were short and not intensive, so it didn't affect performance much. Thank you Michiel Kauw-A-Tjoe for pointing me in the right direction!
def someFunc(self):
    i = 0
    someSelector = objc.selector(self.someSelector, signature='v@:')
    while i < 20:
        self.performSelectorOnMainThread_withObject_waitUntilDone_(someSelector, None, False)
        NSLog(u"Hello I am in someFunc")
        i = i + 1

@objc.IBAction
def buttonPress(self, sender):
    NSThread.detachNewThreadSelector_toTarget_withObject_(self.threadedFunc, self, 1)

def threadedFunc(self):
    NSLog(u"Entered threadedFunc")
    self.t = NSTimer.scheduledTimerWithTimeInterval_target_selector_userInfo_repeats_(
        1/150., self, self.someFunc, None, True)
    NSLog(u"Kicked off Runloop")
    self.t.fire()
The translated function name should be
detachNewThreadSelector_toTarget_withObject_(aSelector, aTarget, anArgument)
You're currently applying the conversion rule to the arguments part instead of the Objective-C call parts. Calling the function with the arguments from your example:
detachNewThreadSelector_toTarget_withObject_(self.threadedFunc, self, 1)
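The naming rule the answer applies is mechanical: every colon in the Objective-C selector becomes an underscore in the PyObjC method name. PyObjC performs this translation itself; the helper below just makes the rule explicit:

```python
# Objective-C selector -> PyObjC name: replace each colon with an
# underscore. A three-argument selector therefore ends with a trailing
# underscore in Python.

def selector_to_pyobjc(selector):
    """Translate an Objective-C selector into its PyObjC method name."""
    return selector.replace(":", "_")

print(selector_to_pyobjc("detachNewThreadSelector:toTarget:withObject:"))
# -> detachNewThreadSelector_toTarget_withObject_
```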

How to connect to Cassandra inside a Pylons app?

I created a new Pylons project and would like to use Cassandra as my database server. I plan on using Pycassa to be able to use Cassandra 0.7beta.
Unfortunately, I don't know where to instantiate the connection to make it available in my application.
The goal would be to :
Create a pool when the application is launched
Get a connection from the pool for each request, and make it available to my controllers and libraries (in the context of the request). The best would be to get a connection from the pool "lazily", i.e. only if needed
If a connection has been used, release it when the request has been processed
Additionally, is there something important I should know about it ? When I see some comments like "Be careful when using a QueuePool with use_threadlocal=True, especially with retries enabled. Synchronization may be required to prevent the connection from changing while another thread is using it.", what does it mean exactly ?
Thanks.
--
Pierre
Well. I worked a little more. In fact, using a connection manager was probably not a good idea as this should be the template context. Additionally, opening a connection for each thread is not really a big deal. Opening a connection per request would be.
I ended up with just pycassa.connect_thread_local() in app_globals, and there I go.
Okay.
I worked a little, I learned a lot, and I found a possible answer.
Creating the pool
The best place to create the pool seems to be in the app_globals.py file, which is basically a container for objects which will be accessible "throughout the life of the application". Exactly what I want for a pool, in fact.
I just added my init code at the end of the file; it takes settings from the Pylons configuration file:
"""Creating an instance of the Pycassa Pool"""
kwargs = {}
# Parsing servers
if 'cassandra.servers' in config['app_conf']:
servers = config['app_conf']['cassandra.servers'].split(',')
if len(servers):
kwargs['server_list'] = servers
# Parsing timeout
if 'cassandra.timeout' in config['app_conf']:
try:
kwargs['timeout'] = float(config['app_conf']['cassandra.timeout'])
except:
pass
# Finally creating the pool
self.cass_pool = pycassa.QueuePool(keyspace='Keyspace1', **kwargs)
I could have done better, like moving that in a function, or supporting more parameters (pool size, ...). Which I'll do.
Getting a connection at each request
Well, there seems to be a simple way: in the file base.py, add something like c.conn = g.cass_pool.get() before calling WSGIController, and something like c.conn.return_to_pool() after. This is simple, and it works. But it gets a connection from the pool even when the controller doesn't need one. I have to dig a little deeper.
Creating a connection manager
I had the simple idea to create a class which would be instantiated at each request in the base.py file, and which would automatically grab a connection from the pool when requested (and release it afterwards). This is a really simple class:
class LocalManager:
    '''Requests a connection from a Pycassa Pool when needed, and releases
    it at the end of the object's life'''

    def __init__(self, pool):
        '''Class constructor'''
        assert isinstance(pool, Pool)
        self._pool = pool
        self._conn = None

    def get(self):
        '''Grabs a connection from the pool if not already done, and returns it'''
        if self._conn is None:
            self._conn = self._pool.get()
        return self._conn

    def __getattr__(self, key):
        '''It's cooler to write "c.conn" than "c.get()" in the code, isn't it?'''
        if key == 'conn':
            return self.get()
        # __getattr__ is only called when normal lookup fails, so anything
        # else really is missing
        raise AttributeError(key)

    def __del__(self):
        '''Releases the connection, if needed'''
        if self._conn is not None:
            self._conn.return_to_pool()
Just added c.cass = LocalManager(g.cass_pool) before calling WSGIController in base.py, and del(c.cass) after, and I'm all done.
And it works:
conn = c.cass.conn
cf = pycassa.ColumnFamily(conn, 'TestCF')
print cf.get('foo')
\o/
I don't know if this is the best way to do this. If not, please let me know =)
Plus, I still did not understand the "synchronization" part in the Pycassa source code, whether it is needed in my case, and what I should do to avoid problems.
Thanks.
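The lazy-checkout behaviour of LocalManager can be exercised without Cassandra or Pylons. ToyPool and Conn below are made-up stand-ins for pycassa's pool and connection, and an explicit release() replaces the fragile __del__ hook:

```python
# Toy pool + lazy manager: a connection is only checked out on first
# access, and returned explicitly. All names here are illustrative.

class Conn:
    pass

class ToyPool:
    def __init__(self, size=2):
        self._free = [Conn() for _ in range(size)]

    def get(self):
        return self._free.pop()

    def put(self, conn):
        self._free.append(conn)

class LazyManager:
    def __init__(self, pool):
        self._pool = pool
        self._conn = None

    @property
    def conn(self):
        if self._conn is None:       # checked out only on first use
            self._conn = self._pool.get()
        return self._conn

    def release(self):
        if self._conn is not None:
            self._pool.put(self._conn)
            self._conn = None

pool = ToyPool()
m = LazyManager(pool)
assert len(pool._free) == 2          # nothing checked out yet
_ = m.conn                           # first access grabs a connection
assert len(pool._free) == 1
m.release()
assert len(pool._free) == 2          # returned to the pool
```

Requests that never touch the connection never pay for a checkout, which is the whole point of the lazy pattern above.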

Is there a way to call a function right before a PyQt application ends?

I am collecting usage stats for my applications, including how long each session lasts. However, I can't seem to be able to save this information, because none of the signals I have tried actually succeeds in calling my report_session function.
These are the signals I have already tried:
lastWindowClosed()
aboutToQuit()
destroyed()
Either these signals never get emitted or the application does not live long enough after that to run anything else. Here is my main:
app = QtGui.QApplication(sys.argv)
ui = MainWindow()
ui.app = app
QtCore.QObject.connect(ui, QtCore.SIGNAL("destroyed()"), ui.report_session)
ui.show()
logger.info('Started!')
splash.finish(ui)
sys.exit(app.exec_())
The method that Mark Byers posted will run after the main widget has been closed, meaning that its controls will no longer be available.
If you need to work with any values from controls on your form, you will want to capture the close event and do your work there:
class MainWidget(QtGui.QWidget):
    # ...
    def closeEvent(self, event):
        print "closing PyQtTest"
        self.SaveSettings()
        # report_session()
Also, see the Message Box example in the ZetCode tutorial First programs in PyQt4 toolkit (near the end of the page). This shows how to accept or cancel the close request.
Put the code between app.exec_() and sys.exit():
ret = app.exec_()
# Your code that must run when the application closes goes here
sys.exit(ret)
To ensure that a Python function gets called at process termination, in general (with or without Qt involved;-), you can use the atexit module of the standard Python library:
import atexit
def whatever(): ...
atexit.register(whatever)
Out of prudence I would recommend against using a bound method instead of a function for this purpose -- it "should" work, but the destruction-phase of a process is always somewhat delicate, and the simpler you keep it, the better.
atexit won't trigger for a sufficiently-hard crash of a process, of course (e.g., if the process is killed with a kill -9, then by definition it's not given a chance to run any termination code) -- the OS sees to that;-). If you need to handle any crash no matter how hard you must do so from a separate "watchdog" process, a substantially subtler issue.
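atexit's behaviour is easy to see in action. Handlers run at normal interpreter shutdown, in reverse registration order; the sketch below runs them in a subprocess so the shutdown output can be captured:

```python
# Demonstrates atexit ordering: handlers registered with
# atexit.register run at normal interpreter exit, last-registered first.
import subprocess
import sys
import textwrap

code = textwrap.dedent("""
    import atexit
    atexit.register(lambda: print("first registered, runs last"))
    atexit.register(lambda: print("last registered, runs first"))
    print("main done")
""")

out = subprocess.run([sys.executable, "-c", code],
                     capture_output=True, text=True).stdout
print(out)
# main done
# last registered, runs first
# first registered, runs last
```

As the answer notes, none of this runs on a hard kill (e.g. kill -9); atexit only covers normal interpreter shutdown.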
Found this answer, which involves overriding closeEvent(). It worked perfectly for me.

How to poll a file in /sys

I am stuck reading a file in /sys which contains the light intensity (in lux) of the ambient light sensor on my Nokia N900 phone.
See thread on talk.maemo.org here
I tried to use pyinotify to poll the file, but this seems somehow wrong to me, since the only events ever raised are "process_IN_OPEN", "process_IN_ACCESS" and "process_IN_CLOSE_NOWRITE".
I basically want to get the changes ASAP, and if something changed, trigger an event, execute a class...
Here's the code I tried, which works, but not as I expected (I was hoping for process_IN_MODIFY to be triggered):
#!/usr/bin/env python
import os, time, pyinotify

ambient_sensor = '/sys/class/i2c-adapter/i2c-2/2-0029/lux'

wm = pyinotify.WatchManager()  # Watch Manager
mask = pyinotify.ALL_EVENTS

def action(self, the_event):
    value = open(the_event.pathname, 'r').read().strip()
    return value

class EventHandler(pyinotify.ProcessEvent):
    ...
    def process_IN_MODIFY(self, event):
        print "MODIFY event:", action(self, event)
    ...

#log.setLevel(10)
notifier = pyinotify.ThreadedNotifier(wm, EventHandler())
notifier.start()

wdd = wm.add_watch(ambient_sensor, mask)

time.sleep(5)
notifier.stop()
Update 1:
Mmmh, all I came up with, without having a clue whether there is a special mechanism, is the following:
f = open('/sys/class/i2c-adapter/i2c-2/2-0029/lux')
while True:
    value = f.read()
    print value
    f.seek(0)
This, wrapped in its own thread, could do the trick, but does anyone have a smarter, less CPU-hogging and faster way to get the latest value?
Since the /sys file is a pseudo-file which just presents a view on an underlying, volatile operating system value, it makes sense that there would never be a modify event raised. Since the file is "modified" from below, it doesn't follow regular file-system semantics.
If a modify event is never raised, using a package like pyinotify isn't going to get you anywhere. 'twould be better to look for a platform-specific mechanism.
Response to Update 1:
Since the N900 maemo runtime supports GFileMonitor, you'd do well to check if it can provide the asynchronous event that you desire.
Busy waiting - as I gather you know - is wasteful. On a phone it can really drain a battery. You should at least sleep in your busy loop.
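Where the driver supports change notification (it calls sysfs_notify()), the low-CPU approach is to block in poll()/select() with POLLPRI | POLLERR on the sysfs file and re-read from offset 0 when it fires; whether the N900's lux attribute supports that is driver-dependent. The blocking pattern itself, sketched on an os.pipe() stand-in with POLLIN:

```python
# Blocking in poll() instead of busy-waiting: the thread sleeps in the
# kernel until the fd is ready. An os.pipe() stands in for the sysfs fd
# here (for a pollable sysfs attribute you would register POLLPRI |
# POLLERR and f.seek(0) before re-reading).
import os
import select
import threading
import time

r, w = os.pipe()

def writer():
    time.sleep(0.1)                 # simulate the value changing later
    os.write(w, b"new value\n")

threading.Thread(target=writer).start()

p = select.poll()
p.register(r, select.POLLIN)
p.poll()                            # blocks here, zero CPU, until ready
data = os.read(r, 64).decode().strip()
print(data)                         # -> new value
```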
Cheers
Bjoern
