I am learning about Python and GTK 3 using GObject introspection. I have worked through some samples and I am beginning to understand how it works, but there is one thing I can't figure out: how can I pass a gpointer parameter?
I am trying to use a function that receives a buffer (as a gpointer), and I always end up with this message:
could not convert value for property `pixels' from LP_c_ubyte to gpointer
(LP_c_ubyte was my last try, but I have tried a lot of types.)
You can't pass a pointer in GObject introspection. If the introspected function is one you wrote yourself, then you should annotate your parameter documentation with, for example, (array length=buflen), where buflen is the name for the parameter that gives the length of the buffer. See the linked page for more information.
If the function is in a library that you didn't write yourself, either look around in the API for a more introspection-friendly function, or file a bug with the library. APIs using bare gpointers shouldn't even be exposed to Python.
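For the specific error above (which looks like it comes from GdkPixbuf's `pixels` property), one introspection-friendly route is to build the pixbuf from a GLib.Bytes instead of touching the raw gpointer. A minimal sketch, assuming PyGObject with gdk-pixbuf 2.32 or later:

import gi
gi.require_version("GdkPixbuf", "2.0")
from gi.repository import GdkPixbuf, GLib

width, height = 4, 4
rowstride = width * 3                   # 3 bytes per RGB pixel, no row padding
data = b"\x00" * (height * rowstride)   # an all-black RGB buffer

# new_from_bytes() takes a GLib.Bytes, so no gpointer is ever exposed
pixbuf = GdkPixbuf.Pixbuf.new_from_bytes(
    GLib.Bytes.new(data),
    GdkPixbuf.Colorspace.RGB,
    False,   # has_alpha
    8,       # bits_per_sample
    width, height, rowstride)
print(pixbuf.get_width(), pixbuf.get_height())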
Both the epydoc and Sphinx documentation generators permit the coder to annotate the expected types of any or all function parameters.
My question is: is there a way (or a module) that enforces these types at run-time when they are documented in the docstring? This wouldn't be strong typing (compile-time checking), but might be called firm typing (run-time checking), perhaps raising a ValueError, or better still a SemanticError.
Ideally there would already be something similar to xkcd's "import antigravity" module, and this "firm_type_check" module would already exist somewhere handy for download.
FYI: The docstring fields for epydoc and Sphinx are as follows:
epydoc:
Functions and Methods parameters:
@param p: ...      # A description of the parameter p for a function or method.
@type p: ...       # The expected type for the parameter p.
@return: ...       # The return value for a function or method.
@rtype: ...        # The type of the return value for a function or method.
@keyword p: ...    # A description of the keyword parameter p.
@raise e: ...      # A description of the circumstances under which a function or method raises exception e.
Sphinx: Inside Python object description directives, reST field lists with these fields are recognized and formatted nicely:
param, parameter, arg, argument, key, keyword: Description of a parameter.
type: Type of a parameter.
raises, raise, except, exception: That (and when) a specific exception is raised.
var, ivar, cvar: Description of a variable.
returns, return: Description of the return value.
rtype: Return type.
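For example, a function documented with these Sphinx fields might look like this:

def divide(a, b):
    """Divide a by b.

    :param a: the dividend
    :type a: float
    :param b: the divisor
    :type b: float
    :returns: the quotient
    :rtype: float
    :raises ZeroDivisionError: if b is zero
    """
    return a / b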
The closest thing I could find was a mention by Guido on mail.python.org of mypy, created by Jukka Lehtosalo (see the Mypy Examples page). Correct me if I'm wrong, but mypy cannot be imported as a Python 3 module.
Similar stackoverflow questions that do not use the docstring per se:
Pythonic Way To Check for A Parameter Type
What's the canonical way to check for type in python?
To my knowledge, nothing of the sort exists, for a few important reasons:
First, docstrings are documentation, just like comments. And just like comments, people will expect them to have no effect on the way your program works. Making your program's behavior depend on its documentation is a major antipattern, and a horrible idea all around.
Second, docstrings aren't guaranteed to be preserved. If you run python with -OO, for example, all docstrings are removed. What then?
Finally, Python 3 introduced optional function annotations, which would serve that purpose much better (PEP 3107: http://legacy.python.org/dev/peps/pep-3107/). Python currently does nothing with them (they're documentation), but if I were to write such a module, I'd use those, not docstrings.
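To sketch the idea: a toy decorator (hypothetical, not an existing module) could read Python 3 annotations and check them at call time:

import functools
import inspect

def enforce_annotations(func):
    """Raise TypeError when an argument doesn't match its annotation."""
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = func.__annotations__.get(name)
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError("%s=%r is not a %s"
                                % (name, value, expected.__name__))
        return func(*args, **kwargs)
    return wrapper

@enforce_annotations
def repeat(text: str, times: int) -> str:
    return text * times

repeat("ab", 3)     # returns "ababab"
repeat("ab", "3")   # raises TypeError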
My honest opinion is this: if you're going to go through the (considerable) trouble of writing a (necessarily half-baked) static type system for Python, the time it will take would be better spent learning another programming language that supports static typing in a less insane way:
Clojure (http://clojure.org/) is incredibly dynamic and powerful (due to its nature as a Lisp) and supports optional static typing through core.typed (https://github.com/clojure/core.typed). It is geared towards concurrency and networking (it has STM and persistent data structures <3), has a great community, and is one of the most elegantly designed languages I've seen. That said, it runs on the JVM, which is both a good and a bad thing.
Golang (http://golang.org/) feels sort of Pythonic (or at least, it's attracting a lot of refugees from Python), is statically typed, and compiles to native code.
Rust (http://www.rust-lang.org/) is lower-level than that, but it has one of the best type systems I've seen (type inference, pattern matching, traits, generics, zero-sized types...) and enforces memory and resource safety at compile time. It is being developed by Mozilla as a language to write their next browser (Servo) in, so performance and safety are its main goals. You can think of it as a modern take on C++. It compiles to native code but hasn't hit 1.0 yet, and as such the language itself is still subject to change, which is why I wouldn't recommend writing production code in it yet.
My previous understanding was that PySide does not require QString. Yet I have found that PySide (I am using v1.2.1 with Python v2.7.5) appears to be inconsistent about this, depending on how I wire up my signals and slots:
from PySide.QtGui import QLineEdit
from PySide.QtCore import SIGNAL

# This way of wiring up signals and slots requires str
le = QLineEdit()
slotOnChanged = self.onChanged
le.textChanged[str].connect(slotOnChanged)

# But this way requires QString
le = QLineEdit()
signalTextChanged = SIGNAL("textChanged(QString)")
slotOnChanged = self.onChanged
le.connect(signalTextChanged, slotOnChanged)
You are correct that PySide does not require the QString class, but are mistaken that there is any inconsistency.
PySide/PyQt are generally quite thin wrappers around Qt's C++ library. However, the signal and slot syntax is one of the few areas where there is a significant deviation from this. The syntax used by Qt (the second method in your examples), requires detailed knowledge of C++ signature arguments, and is easy to get wrong. Also, from a python perspective, the syntax is needlessly verbose, and rather "unpythonic". Because of this, PySide/PyQt have added some syntactic "sugar" which provides an alternative, more pythonic, syntax for signals and slots. This is used by the first method in your examples.
The specific reason why QString has to be used in the second method is that Qt signals are defined statically as part of a C++ class, so the argument to SIGNAL has to match the C++ definition quite precisely in order for the connection to be successful. But note that the argument to SIGNAL doesn't require the QString class to be available - it just requires the use of the string "QString" in the signature.
The "old", C++ way of connecting signals will probably remain available for quite some time (for backward compatibility, if nothing else). But it's generally better to use the "new", pythonic way whenever possible: it's just much clearer and more readable.
In short: efficiency.
A QString is made up of QChars. These provide cross-platform compatibility between the C++ and Python bindings (as well as easier language translation); a string in Python and a string in C++ are usually different, so Qt provides its own.
textChanged(QString) uses QString because:
1) it provides a more agnostic type between the language bindings, and
2) it avoids the type conversion that happens in the first example, so it is more efficient.
Here is the detailed description of QString. Note the link on implicit sharing.
Here are examples of other possibilities that are more idiomatic for PySide.
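For instance, a minimal sketch of a custom new-style signal (assuming PySide 1.x; the Model class and textChanged signal here are made-up names):

from PySide.QtCore import QObject, Signal

def on_changed(text):
    print "changed to:", text

class Model(QObject):
    # new-style signal declared with a Python type - no C++ signature string
    textChanged = Signal(str)

model = Model()
model.textChanged.connect(on_changed)
model.textChanged.emit("hello")   # prints: changed to: hello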
To ask my very specific question I find I need quite a long introduction to motivate and explain it -- I promise there's a proper question at the end!
While reading part of a large Python codebase, sometimes one comes across code where the interface required of an argument is not obvious from "nearby" code in the same module or package. As an example:
def make_factory(schema):
    entity = schema.get_entity()
    ...
There might be many "schemas" and "factories" that the code deals with, and "def get_entity()" might be quite common too (or perhaps the function doesn't call any methods on schema, but just passes it to another function), so a quick grep isn't always helpful for finding out more about what "schema" is (and the same goes for the return type).
Though "duck typing" is a nice feature of Python, sometimes the uncertainty in a reader's mind about the interface of arguments passed in as the "schema" gets in the way of quickly understanding the code (and the same goes for uncertainty about the typical concrete classes that implement the interface). Looking at the automated tests can help, but explicit documentation can be better because it's quicker to read. Any such documentation is best when it can itself be tested so that it doesn't get out of date.
Doctests are one possible approach to solving this problem, but that's not what this question is about.
Python 3 has a "parameter annotations" feature (part of the function annotations feature, defined in PEP 3107). The uses to which that feature might be put aren't defined by the language, but it can be used for this purpose. That might look like this:
def make_factory(schema: "xml_schema"):
    ...
Here, "xml_schema" identifies a Python interface that the argument passed to this function should support. Elsewhere there would be code that defines that interface in terms of attributes, methods & their argument signatures, etc. and code that allows introspection to verify whether particular objects provide an interface (perhaps implemented using something like zope.interface / zope.schema). Note that this doesn't necessarily mean that the interface gets checked every time an argument is passed, nor that static analysis is done. Rather, the motivation of defining the interface is to provide ways to write automated tests that verify that this documentation isn't out of date (they might be fairly generic tests so that you don't have to write a new test for each function that uses the parameters, or you might turn on run-time interface checking but only when you run your unit tests). You can go further and annotate the interface of the return value, which I won't illustrate.
So, the question:
I want to do exactly that, but using Python 2 instead of Python 3. Python 2 doesn't have the function annotations feature. What's the "closest thing" in Python 2? Clearly there is more than one way to do it, but I suspect there is one (relatively) obvious way to do it.
For extra points: name a library that implements the one obvious way.
Take a look at plac, which uses annotations to define a command-line interface for a script. On Python 2.x it uses the plac.annotations() decorator.
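The same trick works outside plac: on Python 2, a decorator can simply attach the __annotations__ dict itself. A minimal sketch (this annotations() helper is illustrative, not plac's actual implementation):

def annotations(**ann):
    """Attach Python 3-style annotations to a function on Python 2."""
    def annotate(func):
        func.__annotations__ = ann
        return func
    return annotate

@annotations(schema="xml_schema")
def make_factory(schema):
    return schema.get_entity()

print make_factory.__annotations__   # {'schema': 'xml_schema'}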
The closest thing is, I believe, an annotation library called PyAnno.
From the project webpage:
"The Pyanno annotations have two functions:
Provide a structured way to document Python code
Perform limited run-time checking "
I've discovered a function in the Python C API named PyEval_CallFunction which seems to be useful. It allows you to invoke a Python callable by saying something like:
PyEval_CallFunction(obj, "OOO", a, b, c);
However, I can't find any official documentation on this function. A Google search brings up various unofficial tutorials which discuss it, but:
1. The function isn't documented in the official Python docs, so I don't know whether it's even supposed to be part of the public API.
2. Searching the web turns up inconsistent usage. Some tutorials indicate the format string needs parentheses around the type list, like "(OiiO)", whereas other times I see it used without the parentheses. When I actually try the function in a real program, it seems to require the parentheses, otherwise it segfaults.
I'd like to use this function because it's convenient. Does anyone know anything about this, or know why it isn't documented? Is it part of the public API?
I couldn't find many references to it either, but the tutorial you linked to mentions this:
The string format and the following arguments are as for Py_BuildValue (XXX so i really should have described that by now!). A call such as
PyEval_CallFunction(obj, "iii", a, b, c);
is equivalent to
PyEval_CallObject(obj, Py_BuildValue("iii", a, b, c));
I suppose PyEval_CallFunction is not public API, as its value seems rather limited: there is not much of a difference between these two. But then again, I'm not really involved in Python extensions, so this is just my view on it.
PyEval_CallObject itself is just a macro around PyEval_CallObjectWithKeywords.
#define PyEval_CallObject(func,arg) \
PyEval_CallObjectWithKeywords(func, arg, (PyObject *)NULL)
On the matter of "What is public API?", here is a recent message from Martin v. Löwis:
Just to stress and support Georg's explanation: the API is not defined through the documentation, but instead primarily through the header files. All functions declared as PyAPI_FUNC and not starting with _Py are public API. There used to be a lot of undocumented API (up to 1.4, there was no API documentation at all, only the extension module tutorial); these days, more and more API gets documented.
http://mail.python.org/pipermail/python-dev/2011-February/107973.html
The reason it isn't documented is because you should be using PyObject_CallFunction instead.
The PyEval_* function family are the raw internal calls for the interpreter evaluation loop. The corresponding documented PyObject_* calls include all the additional interpreter state integrity checks, argument validation and stack protection.
This may seem like a weird question, but I would like to know how I can run a function in a .dll from a memory 'signature'. I don't understand much about how it actually works, but I need it badly. It's a way of running unexported functions from within a .dll, if you know the memory signature and address of the function.
For example, I have these:
respawn_f "_ZN9CCSPlayer12RoundRespawnEv"
respawn_sig "568BF18B06FF90B80400008B86E80D00"
respawn_mask "xxxxx?xxx??xxxx?"
And using some pretty nifty C++ code, you can use these to run functions from within a .dll.
Here is a well explained article on it:
http://wiki.alliedmods.net/Signature_Scanning
So, is it possible to do this from inside Python, using ctypes or any other way?
If you can already run them using C++, then you can try using SWIG to generate Python wrappers for the C++ code you've written, making it callable from Python.
http://www.swig.org/
Some caveats that I've found using SWIG:
- Swig looks up types based on a string value. For example, an integer type in Python (int) will look to make sure that the cpp type is "int", otherwise swig will complain about type mismatches. There is no automatic conversion.
- Swig copies source code verbatim, therefore even objects in the same namespace will need to be fully qualified so that the cxx file will compile properly.
Hope that helps.
You said you were trying to call a function that was not exported; as far as I know, that's not possible from Python. However, your problem seems to be merely that the name is mangled.
You can invoke an arbitrary export using ctypes. Since the mangled name isn't a valid Python identifier, you'll need to use getattr().
Another approach if you have the right information is to find the export by ordinal, which you'd have to do if there was no name exported at all. One way to get the ordinal would be using dumpbin.exe, included in many Windows compiled languages. It's actually a front-end to the linker, so if you have the MS LinK.exe, you can also use that with appropriate commandline switches.
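For instance, ctypes lets you index a loaded DLL by ordinal directly (the DLL name and ordinal below are placeholders):

import ctypes

dll = ctypes.WinDLL("mydll.dll")   # placeholder DLL name
func = dll[42]                     # look up the export by ordinal 42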
To get the function reference (which is a "function-pointer" object bound to its address), you can use something like:
import ctypes
# "##myfunc" stands in for the mangled export name; since it isn't a
# valid Python identifier, getattr() is used instead of attribute access
func = getattr(ctypes.windll.msvcrt, "##myfunc")
retval = func(None)
Naturally, you'd replace the 'msvcrt' with the dll you specifically want to call.
What I don't show here is how to unmangle the name to derive the calling signature, and thus the arguments necessary. Doing that would require a demangler, and those are very specific to the brand AND VERSION of C++ compiler used to create the DLL.
There is a certain amount of error checking if the function is stdcall, so you can sometimes fiddle with things till you get them right. But if the function is cdecl, then there's no way to automatically check. Likewise, you have to remember to include the extra 'this' parameter if appropriate.
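For reference, ctypes separates the two calling conventions by loader; a minimal Windows-only sketch:

import ctypes

# stdcall exports are loaded via windll/WinDLL; because the callee cleans
# up the stack, ctypes can detect argument-count mismatches after a call
user32 = ctypes.windll.user32
user32.MessageBoxA(None, b"hello", b"ctypes", 0)

# cdecl exports are loaded via cdll/CDLL; no such check is possible there
msvcrt = ctypes.cdll.msvcrt
msvcrt.printf(b"%d\n", 42)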