PyQt4 @pyqtSlot: what is the result kwarg for?

While reading this, two questions came up:
1.
It says
it is sometimes necessary to explicitly mark a Python method as being a Qt slot
while I always use the @pyqtSlot decorator, because it says:
Connecting a signal to a decorated Python method also has the advantage of reducing the amount of memory used and is slightly faster
So I ask myself: in which specific cases is it necessary?
And: are there any advantages of not using the @pyqtSlot decorator?
2.
The result keyword argument: what is its purpose?
@pyqtSlot(int, result=int)
def foo(self, arg1):
    """ C++: int foo(int) """
It looks like the return value's type, but AFAIK you cannot retrieve return values when emitting signals.
Any ideas about that?

It is faster because of the PyQt architecture: PyQt converts Python slots to C++ slots in order to communicate with the Qt framework. When you explicitly mark a Python method as a Qt slot and provide a C++ signature for it, PyQt doesn't have to guess the C++ signature itself, as it is already specified. This can improve performance in heavy projects.
The return value is only needed when you want to call the slot as a normal function.
Edit: It seems that the QMetaObject.invokeMethod method executes a slot and uses its return value.
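To make the edit above concrete, here is a minimal sketch of retrieving a slot's return value through QMetaObject.invokeMethod. It assumes PyQt5 is installed (PyQt4 offers the same API under PyQt4.QtCore); the Worker class and foo slot are illustrative names, not from the question:

```python
from PyQt5.QtCore import (QObject, QMetaObject, Qt,
                          Q_ARG, Q_RETURN_ARG, pyqtSlot)

class Worker(QObject):
    # result=int declares the C++ return type, i.e. "int foo(int)"
    @pyqtSlot(int, result=int)
    def foo(self, x):
        return x * 2

w = Worker()
# invokeMethod calls the slot through the meta-object system and hands
# back its return value -- this is where the result= type matters.
ret = QMetaObject.invokeMethod(w, "foo", Qt.DirectConnection,
                               Q_RETURN_ARG(int), Q_ARG(int, 21))
print(ret)  # 42
```

When the slot is connected to a signal, the return value is simply discarded; only direct invocation like this can observe it.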

Related

Trigger wrapped C/C++ rebuild when instantiating a Julia "object" (that calls the wrapped C/C++ functions)

I found the following paradigm quite useful and I would love to be able to reproduce it somehow in Julia to take advantage of Julia's speed and C wrapping capabilities.
I normally maintain a set of objects in Python/Matlab that represent the blocks (algorithms) of a pipeline, set up unit-tests, etc.
Then I develop the equivalent C/C++ code by having equivalent Python/Matlab objects (same API) that wrap the C/C++ to implement the same functionality and have to pass the same tests (by this I mean the exact same tests written in Python/Matlab, where either I generate synthetic data or I load recorded data).
I maintain the full-Python and Python/C++ objects in parallel, enforcing parity with extensive test suites. The Python-only and Python/C++ versions are fully interchangeable.
Every time I need to modify the behavior of the pipeline, or debug an issue, I first use the fully pythonic version of the specific object/block I need to modify, typically in conjunction with other blocks running in python/C++ mode for speed, then update the tests to match the behavior of the modified python block and finally update the C++ version until it reaches parity and passes the updated tests.
Every time I instantiate the Python/C++ version of the block, the constructor runs a "make" that rebuilds the C++ code if there was any modification, to make sure I always test the latest version of the C++.
Is there any elegant way to reproduce the same paradigm with the Julia/C++ combination? Maintaining julia/C++ versions in parallel via automatic testing.
I.e. how do I check/rebuild the C++ only once when I instantiate the object and not per function call (it would be way too slow).
I guess I could call the "make" once at the test-suite level before I run all the tests of the different blocks. But then I will have to manually call it if I'm writing a quick python script for a debugging session.
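The "rebuild once, at construction time" part of the paradigm described above can be sketched in plain Python. This is a hypothetical sketch, not the asker's actual code; the names ensure_built, CxxFilter, and the "cpp/filter" path are all made up, and the runner parameter exists only so the build step can be stubbed out:

```python
import subprocess

# Directories already rebuilt in this process.
_built_dirs = set()

def ensure_built(src_dir, runner=subprocess.check_call):
    """Run `make` for src_dir once per process; later calls are no-ops."""
    if src_dir not in _built_dirs:
        runner(["make", "-C", src_dir])
        _built_dirs.add(src_dir)

class CxxFilter:
    """Hypothetical C++-backed block: rebuilds its C++ once, on construction."""
    def __init__(self, src_dir="cpp/filter"):
        ensure_built(src_dir)  # cheap after the first instantiation
        # ... load the freshly built shared library here ...
```

Because the check is memoized at module level, a quick debugging script that instantiates several blocks still triggers at most one make per source directory.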
Let's pick the example of a little filter object with a configure method that changes the filter parameters and a filter method that filters the incoming data.
We will have something like:
f1 = filter('python');
f2 = filter('C++'); % rebuild C++ as needed
f1.configure(0.5);
f2.configure(0.5);
x1 = data;
x2 = data;
xf1 = f1.filter(x1);
xf2 = f2.filter(x2);
assert( xf1 == xf2 )
In general there will be a bunch of tests that instantiate the objects in both python-only mode or python/C++ mode and test them.
I guess what I'm trying to say is that since in Julia the paradigm is to have a filter type, and then "external" methods that modify/use the filter type, there is no centralized way to check/rebuild all its methods that wrap C code. Unless the type contains a list of variables that keep track of the relevant methods, which seems awkward.
I would appreciate comments / ideas.
Is there a reason why you can't wrap your functions in a struct like this?
struct Filter
    stuff::String
    param::Int
    function Filter(stuff::String, param::Int)
        # Rebuild the C++ code here, once per construction,
        # e.g. run(`make -C path/to/cpp`)
        # The inner constructor then returns the created object:
        new(stuff, param)
    end
end

Enforcing Python function parameter types from the docstring

Both the epydoc and Sphinx documentation generators permit the coder to annotate the expected types of any/all function parameters.
My question is: is there a way (or module) that enforces these types (at run-time) when they are documented in the docstring? This wouldn't be strong-typing (compile-time checking), but (more likely) might be called firm-typing (run-time checking), maybe raising a "ValueError", or even better still... raising a "SemanticError".
Ideally there would already be something (like a module) similar to the "import antigravity" module as per xkcd, and this "firm_type_check" module would already exist somewhere handy for download.
FYI: The docstring fields for epydoc and Sphinx are as follows:
epydoc:
Functions and Methods parameters:
@param p: ...    # A description of the parameter p for a function or method.
@type p: ...     # The expected type for the parameter p.
@return: ...     # The return value for a function or method.
@rtype: ...      # The type of the return value for a function or method.
@keyword p: ...  # A description of the keyword parameter p.
@raise e: ...    # A description of the circumstances under which a function or method raises exception e.
Sphinx: Inside Python object description directives, reST field lists with these fields are recognized and formatted nicely:
param, parameter, arg, argument, key, keyword: Description of a parameter.
type: Type of a parameter.
raises, raise, except, exception: That (and when) a specific exception is raised.
var, ivar, cvar: Description of a variable.
returns, return: Description of the return value.
rtype: Return type.
The closest I could find was a mention by Guido on mail.python.org of mypy, created by Jukka Lehtosalo (see the Mypy Examples). CMIIW: mypy cannot be imported as a py3 module.
Similar stackoverflow questions that do not use the docstring per se:
Pythonic Way To Check for A Parameter Type
What's the canonical way to check for type in python?
To my knowledge, nothing of the sort exists, for a few important reasons:
First, docstrings are documentation, just like comments. And just like comments, people will expect them to have no effect on the way your program works. Making your program's behavior depend on its documentation is a major antipattern, and a horrible idea all around.
Second, docstrings aren't guaranteed to be preserved. If you run python with -OO, for example, all docstrings are removed. What then?
Finally, Python 3 introduced optional function annotations, which would serve that purpose much better: http://legacy.python.org/dev/peps/pep-3107/ . Python currently does nothing with them (they're documentation), but if I were to write such a module, I'd use those, not docstrings.
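To illustrate the suggestion above, a runtime checker built on function annotations (rather than docstrings) can be quite small. This is a minimal sketch, not an existing module; enforce_types and add are made-up names:

```python
import functools
import inspect

def enforce_types(func):
    """Check arguments and return value against the function's annotations."""
    hints = func.__annotations__
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            if expected is not None and not isinstance(value, expected):
                raise TypeError("%s: %r is not a %s"
                                % (name, value, expected.__name__))
        result = func(*args, **kwargs)
        expected = hints.get("return")
        if expected is not None and not isinstance(result, expected):
            raise TypeError("return: %r is not a %s"
                            % (result, expected.__name__))
        return result
    return wrapper

@enforce_types
def add(a: int, b: int) -> int:
    return a + b

add(1, 2)    # fine
# add("1", 2)  would raise TypeError at call time
```

Unlike docstrings, annotations survive python -OO and are introspectable as real Python objects, which is exactly why they are the better hook for this kind of checking.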
My honest opinion is this: if you're gonna go through the (considerable) trouble of writing a (necessarily half-baked) static type system for Python, all the time it will take you would be put to better use by learning another programming language that supports static typing in a less insane way:
Clojure (http://clojure.org/) is incredibly dynamic and powerful (due to its nature as a Lisp) and supports optional static typing through core.typed (https://github.com/clojure/core.typed). It is geared towards concurrency and networking (it has STM and persistent data structures <3 ), has a great community, and is one of the most elegantly-designed languages I've seen. That said, it runs on the JVM, which is both a good and a bad thing.
Golang (http://golang.org/) feels sort-of Pythonic (or at least, it's attracting a lot of refugees from Python), is statically typed and compiles to native code.
Rust (http://www.rust-lang.org/) is lower-level than that, but it has one of the best type systems I've seen (type inference, pattern matching, traits, generics, zero-sized types...) and enforces memory and resource safety at compile time. It is being developed by Mozilla as a language to write their next browser (Servo) in, so performance and safety are its main goals. You can think of it as a modern take on C++. It compiles to native code, but hasn't hit 1.0 yet and as such, the language itself is still subject to change. Which is why I wouldn't recommend writing production code in it yet.

Using gpointer with Python and GObject introspection

I am learning about Python and Gtk 3, using GObject introspection. I have done some samples, and I am beginning to understand how it works. But there is one thing I can't understand: how can I pass a gpointer param?
I am trying to use a function that receives a buffer (as a gpointer), and I always end up with this message:
could not convert value for property `pixels' from LP_c_ubyte to gpointer
(LP_c_ubyte was my last try, but I have tried a lot of types.)
You can't pass a pointer in GObject introspection. If the introspected function is one you wrote yourself, then you should annotate your parameter documentation with, for example, (array length=buflen), where buflen is the name for the parameter that gives the length of the buffer. See the linked page for more information.
If the function is in a library that you didn't write yourself, either look around in the API for a more introspection-friendly function, or file a bug with the library. APIs using bare gpointers shouldn't even be exposed to Python.

Why does PySide still require QString for textChanged SIGNAL?

My previous understanding was that PySide does not require QString. Yet I found that PySide (I am using v1.2.1 with Python v2.7.5) appears to be inconsistent about this depending on how I wire up my Signals and Slots:
# This Way of wiring up signals and slots requires str
le = QLineEdit()
slotOnChanged = self.onChanged
le.textChanged[str].connect(slotOnChanged)
# But this Way requires QString
le = QLineEdit()
signalTextChanged = SIGNAL("textChanged(QString)")
slotOnChanged = self.onChanged
le.connect(signalTextChanged, slotOnChanged)
You are correct that PySide does not require the QString class, but are mistaken that there is any inconsistency.
PySide/PyQt are generally quite thin wrappers around Qt's C++ library. However, the signal and slot syntax is one of the few areas where there is a significant deviation from this. The syntax used by Qt (the second method in your examples), requires detailed knowledge of C++ signature arguments, and is easy to get wrong. Also, from a python perspective, the syntax is needlessly verbose, and rather "unpythonic". Because of this, PySide/PyQt have added some syntactic "sugar" which provides an alternative, more pythonic, syntax for signals and slots. This is used by the first method in your examples.
The specific reason why QString has to be used in the second method, is that Qt signals are defined statically as part of a C++ class. So the argument to SIGNAL has to quite precisely match the C++ definition in order for the connection to be successful. But note that the argument to SIGNAL doesn't require the QString class to be available - it just requires the use of the string "QString" in the signature.
The "old", C++ way of connecting signals will probably remain available for quite some time (for backward compatibility if nothing else). But it's generally better to use the "new", pythonic way whenever possible. It's just much clearer, and more readable.
In short: efficiency.
A QString is made up of QChars. These provide cross-platform compatibility between the C++ and Python bindings (as well as easier language translation); a string in Python and a string in C++ are usually represented differently, so Qt provides its own.
textChanged(QString) uses QString because ...
1) it can provide a more agnostic type between the language bindings,
2) it avoids the type conversion that happens in the first example and is more efficient.
Here is the detailed description of QString. Note the link on implicit sharing.
Here are examples of other possibilities that are more idiomatic for PySide.

Passing C function pointers between two python modules

I'm writing an application working with plugins. There are two types of plugins: Engine and Model. Engine objects have an update() method that calls the Model.velocity() method.
For performance reasons these methods are allowed to be written in C. This means that sometimes they will be written in Python and sometimes written in C.
The problem is that this forces Engine.update() to make an expensive Python function call to Model.velocity() (and also to reacquire the GIL). I thought about adding something like Model.get_velocity_c_func() to the API, which would allow Model implementations to return a pointer to the C version of their velocity() method if available, making it possible for Engine to do a faster C function call.
What data type should I use to pass the function pointer ? And is this a good design at all, maybe there is an easier way ?
The CObject (PyCObject) data type exists for this purpose. It holds a void*, but you can store any data you wish. You do have to be careful not to pass the wrong CObject to the wrong functions, as some other library's CObjects will look just like your own.
If you want more type security, you could easily roll your own PyType for this; all it has to do, after all, is contain a pointer of the right type. (Note that in Python 2.7+/3.1+, PyCObject is deprecated in favour of PyCapsule, which carries a name string for exactly this kind of type checking.)
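The core idea, passing a raw C function pointer from one Python-visible place to another and rewrapping it with the right prototype, can be sketched at the Python level with only the stdlib's ctypes. Here libm's sqrt stands in for the plugin's C function, and SQRT_T is an illustrative name:

```python
import ctypes
import ctypes.util

# "Module A": load a C library and look up a C function.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Extract the raw function pointer as an integer address --
# this is the void*-sized value a CObject/capsule would carry.
addr = ctypes.cast(libm.sqrt, ctypes.c_void_p).value

# "Module B": rewrap the bare address with the correct prototype.
SQRT_T = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)
sqrt = SQRT_T(addr)

print(sqrt(9.0))  # 3.0
```

The rewrapping step is exactly where the type-safety concern in the answer bites: nothing stops you from casting the address to the wrong prototype, which is why capsules attach a name to the pointer.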
