For all intents and purposes, an Objective-C method declaration is
simply a C function that prepends two additional parameters (see
“Messaging” in the Objective-C Runtime Programming Guide).
Thus, the structure of an Objective-C method declaration differs from the structure of a method that uses named or keyword parameters
in a language like Python, as the following Python example
illustrates:
In this Python example, Thing and NeatMode might be omitted or might have different values when called.
def func(a, b, NeatMode=SuperNeat, Thing=DefaultThing):
    pass
What's the goal of showing this example in an Objective-C-related book?
This is a (somewhat poor) example of how Objective-C lacks certain features that other languages (for example, Python) have. The text explains that while Objective-C has "named parameters" of the format
- (void)myMethodWithArgument:(NSObject *)argument andArgument:(NSObject *)another;
those parameters do not support default values, which Python's do.
The mention of prepending two arguments hints at how message passing in Objective-C works under the hood, which is by prepending each method with a receiver object and a selector. You don't need to know this detail in order to write code in Objective-C, especially at a beginner level, but Apple explains this process here.
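To make that concrete, here is a rough sketch of what the method above boils down to in C: the receiver and the selector arrive as two hidden leading parameters. The function name below is made up for illustration; the compiler generates its own symbol.

#include <objc/objc.h>   /* id and SEL */

/* Roughly what -[MyClass myMethodWithArgument:andArgument:]
   compiles down to: 'self' (the receiver) and '_cmd' (the
   selector) are prepended before the declared parameters. */
void MyClass_myMethod(id self, SEL _cmd, id argument, id another)
{
    /* method body */
}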
def func(a, b, NeatMode=SuperNeat, Thing=DefaultThing):
    pass
NeatMode and Thing are optional named parameters. In Objective-C they would be
- (void)func:(int)a :(int)b NeatMode:(NSObject *)SuperNeat Thing:(NSObject *)DefaultThing;
except that Objective-C gives you no way to make NeatMode or Thing optional or to supply default values for them.
Please read more about this subject:
http://www.diveintopython.net/power_of_introspection/optional_arguments.html
I think the point here is to differentiate between how you are used to receiving parameters in functions and how Objective-C does it. Normally:
public void accumulate(double value, double value1) {
}
And in objective-c:
-(void)accumulateDouble:(double)aDouble withAnotherDouble:(double)anotherDouble{
}
Related
I am creating Python bindings for a C library.
In C the code to use the functions would look like this:
Ihandle *foo;
foo = MethFunc();
SetAttribute(foo, 's');
I am trying to get this into Python. Where I have MethFunc() and SetAttribute() functions that could be used in my Python code:
import mymodule
foo = mymodule.MethFunc()
mymodule.SetAttribute(foo)
So far my C code to return the function looks like this:
static PyObject * _MethFunc(PyObject *self, PyObject *args) {
    return Py_BuildValue("O", MethFunc());
}
But that fails by crashing (no errors)
I have also tried return MethFunc(); but that failed.
How can I return the function foo (or if what I am trying to achieve is completely wrong, how should I go about passing MethFunc() to SetAttribute())?
The problem here is that MethFunc() returns an IHandle *, but you're telling Python to treat it as a PyObject *. Presumably those are completely unrelated types.
A PyObject * (or any struct you or Python defines that starts with an appropriate HEAD macro) begins with a reference count and a pointer to a type object, and the first thing Python is going to do with any object you hand it is deal with those fields. So, if you give it an object that instead starts with, say, two ints, Python is going to end up trying to access a type object at 0x00020001 or similar, which is almost certain to segfault.
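For reference, here is roughly what that header declares (a simplified sketch of CPython 2.x's object.h, with debug-only fields omitted):

/* Every Python object begins with these two fields,
   spelled via the PyObject_HEAD macro. */
typedef struct _object {
    Py_ssize_t ob_refcnt;        /* reference count */
    struct _typeobject *ob_type; /* pointer to the object's type */
} PyObject;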
If you need to pass around a pointer to some C object, you have to wrap it up in a Python object. There are three ways to do this, from hackiest to most solid.
First, you can just cast the IHandle * to a size_t, then PyLong_FromSize_t it.
This is dead simple to implement. But it means these objects are going to look exactly like numbers from the Python side, because that's all they are.
Obviously you can't attach a method to this number; instead, your API has to be a free function that takes a number, then casts that number back to an IHandle* and calls a method.
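For concreteness, a minimal sketch of this approach (using MethFunc and SetAttribute from the question; error handling kept to a minimum):

#include <Python.h>

/* Wrap: hand the pointer to Python as a plain integer. */
static PyObject *
_MethFunc(PyObject *self, PyObject *args)
{
    return PyLong_FromSize_t((size_t)MethFunc());
}

/* Unwrap: take the integer back and cast it to a pointer again. */
static PyObject *
_SetAttribute(PyObject *self, PyObject *args)
{
    Py_ssize_t addr;
    char attr;
    if (!PyArg_ParseTuple(args, "nc", &addr, &attr))
        return NULL;
    SetAttribute((IHandle *)addr, attr);
    Py_RETURN_NONE;
}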
It's more like, e.g., C's stdio, where you have to keep passing stdin or f as an argument to fread, instead of Python's io, where you call methods on sys.stdin or f.
But even worse, because there's no type checking, static or dynamic, to protect you from some Python code accidentally passing you the number 42. Which you'll then cast to an IHandle * and try to dereference, leading to a segfault…
And if you were hoping Python's garbage collector would help you know when the object is still referenced, you're out of luck. You need to make your users manually keep track of the number and call some CloseHandle function when they're done with it.
Really, this isn't that much better than accessing your code from ctypes, so hopefully that inspires you to keep reading.
A better solution is to cast the IHandle * to a void *, then PyCapsule_New it.
If you haven't read about capsules, you need to at least skim the main chapter. But the basic idea is that it wraps up a void* as a Python object.
So, it's almost as simple as passing around numbers, but solves most of the problems. Capsules are opaque values which your Python users can't accidentally do arithmetic on; they can't send you 42 in place of a capsule; you can attach a function that gets called when the last reference to a capsule goes away; you can even give it a nice name to show up in the repr.
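A sketch of the same pair of functions using a capsule (the "mymodule.IHandle" string is just a label used for type checking; CloseHandle is the hypothetical cleanup function mentioned above):

/* Runs automatically when the last reference to the capsule dies. */
static void
ihandle_destructor(PyObject *capsule)
{
    IHandle *h = PyCapsule_GetPointer(capsule, "mymodule.IHandle");
    if (h)
        CloseHandle(h);
}

static PyObject *
_MethFunc(PyObject *self, PyObject *args)
{
    return PyCapsule_New(MethFunc(), "mymodule.IHandle", ihandle_destructor);
}

static PyObject *
_SetAttribute(PyObject *self, PyObject *args)
{
    PyObject *capsule;
    char attr;
    if (!PyArg_ParseTuple(args, "Oc", &capsule, &attr))
        return NULL;
    IHandle *h = PyCapsule_GetPointer(capsule, "mymodule.IHandle");
    if (!h)
        return NULL;  /* not our capsule; exception already set */
    SetAttribute(h, attr);
    Py_RETURN_NONE;
}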
But you still can't attach any behavior to capsules.
So, your API will still have to be MethSetAttribute(mymeth, foo) instead of mymeth.SetAttribute(foo) if mymeth is a capsule, just as if it were an int. (Except now it's type-safe.)
Finally, you can build a new Python extension type for a struct that contains an IHandle *.
This is a lot more work. And if you haven't read the tutorial on Defining Extension Types, you need to go thoroughly read through that whole chapter.
But it means that you have an actual Python type, with everything that goes with it.
You can give it a SetAttribute method, and Python code can just call that method. You can give it whatever __str__ and __repr__ you want. You can give it a __doc__. Python code can do isinstance(mymeth, MyMeth). And so on.
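To give a feel for the shape of it, here is a heavily abbreviated sketch of such a type (the tutorial covers the many slots and the module-init wiring omitted here; CloseHandle is again the hypothetical cleanup call):

typedef struct {
    PyObject_HEAD
    IHandle *handle;            /* the wrapped C pointer */
} MyMethObject;

static void
MyMeth_dealloc(MyMethObject *self)
{
    CloseHandle(self->handle);
    Py_TYPE(self)->tp_free((PyObject *)self);
}

static PyObject *
MyMeth_SetAttribute(MyMethObject *self, PyObject *args)
{
    char attr;
    if (!PyArg_ParseTuple(args, "c", &attr))
        return NULL;
    SetAttribute(self->handle, attr);
    Py_RETURN_NONE;
}

static PyMethodDef MyMeth_methods[] = {
    {"SetAttribute", (PyCFunction)MyMeth_SetAttribute, METH_VARARGS,
     "Set an attribute on the underlying handle."},
    {NULL}                      /* sentinel */
};

static PyTypeObject MyMethType = {
    PyVarObject_HEAD_INIT(NULL, 0)
    "mymodule.MyMeth",          /* tp_name */
    sizeof(MyMethObject),       /* tp_basicsize */
    /* the remaining slots (tp_dealloc, tp_flags, tp_methods, ...)
       get filled in before calling PyType_Ready(&MyMethType)
       in the module init function */
};

A module-level factory wrapping MethFunc() would then allocate one of these and stash the IHandle * in its handle field.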
If you're willing to use C++, or D, or Rust instead of C, there are some great libraries (PyCxx, boost::python, Pyd, rust-cpython, etc.) that can do most of the boilerplate for you. You just declare that you want a Python class, describe how you want its attributes and methods bound to your C attributes and methods, and you get something you can use like a C++ class, except that it's actually a PyObject * under the covers. (And it'll even take care of all the refcounting cruft for you via RAII, which will save you endless weekends debugging segfaults and memory leaks…)
Or you can use Cython, which lets you write C extension modules in a language that's basically Python, but extended to interface with C code. So your wrapper class is just a class, but with a special private cdef attribute that holds the IHandle *, and your SetAttribute(self, s) can just call the C SetAttribute function with that private attribute.
Or, as one user suggested, you can also use SWIG to generate the C bindings for you. For simple cases, it's pretty trivial: just feed it your C API, and it gives you back the code to build your Python .so. For less simple cases, I personally find it a lot more painful than something like PyCxx, but it definitely has a lower learning curve if you don't already know C++.
Many languages support ad-hoc polymorphism (a.k.a. function overloading) out of the box. However, it seems that Python opted out of it. Still, I can imagine there might be a trick or a library that is able to pull it off in Python. Does anyone know of such a tool?
For example, in Haskell one might use this to generate test data for different types:
-- In some testing library:
class Randomizable a where
genRandom :: a
-- Overload for different types
instance Randomizable String where genRandom = ...
instance Randomizable Int where genRandom = ...
instance Randomizable Bool where genRandom = ...
-- In some client project, we might have a custom type:
instance Randomizable VeryCustomType where genRandom = ...
The beauty of this is that I can extend genRandom for my own custom types without touching the testing library.
How would you achieve something like this in Python?
Python is not a statically typed language, so it really doesn't matter if you have an instance of Randomizable or an instance of some other class which has the same methods.
One way to get the appearance of what you want could be this:
types_ = {}

def registerType(dtype, cls):
    types_[dtype] = cls

def RandomizableT(dtype):
    return types_[dtype]
Firstly, yes, I did define a function with a capital letter, but it's meant to act more like a class. For example:
registerType(int, TheLibrary.Randomizable)
registerType(str, MyLibrary.MyStringRandomizable)
Then, later:
dtype = ...  # get whatever type you want to randomize
randomizer = RandomizableT(dtype)()
print randomizer.getRandom()
A Python function cannot be automatically specialised based on static compile-time typing. Therefore its result can only depend on its arguments received at run-time and on the global (or local) environment, unless the function itself is modifiable in-place and can carry some state.
Your generic function genRandom takes no arguments besides the typing information. Thus in Python it should at least receive the type as an argument. Since built-in classes cannot be modified, the generic function (instance) implementation for such classes should be somehow supplied through the global environment or included into the function itself.
I've found out that since Python 3.4, there is the functools.singledispatch decorator. However, it works only for functions which receive a type instance (an object) as the first argument, so it is not clear how it could be applied in your example. I am also a bit confused by its rationale:
In addition, it is currently a common anti-pattern for Python code to inspect the types of received arguments, in order to decide what to do with the objects.
I understand that anti-pattern is a jargon term for a pattern which is considered undesirable (and does not at all mean the absence of a pattern). The rationale thus claims that inspecting types of arguments is undesirable, and this claim is used to justify introducing a tool that will simplify ... dispatching on the type of an argument. (Incidentally, note that according to PEP 20, "Explicit is better than implicit.")
The "Alternative approaches" section of PEP 443 "Single-dispatch generic functions" however seems worth reading. There are several references to possible solutions, including one to "Five-minute Multimethods in Python" article by Guido van Rossum from 2005.
Does this count as ad hoc polymorphism?
class A:
    def __init__(self):
        pass

    def aFunc(self):
        print "In A"


class B:
    def __init__(self):
        pass

    def aFunc(self):
        print "In B"


f = A()
f.aFunc()
f = B()
f.aFunc()
Output:
In A
In B
Another version of polymorphism
from module import aName
If two modules use the same interface, you could import either one and use it in your code.
One example of this is:
from xml.etree.ElementTree import XMLParser
I am using the PyObject functionality to call C functions, and
return Py_BuildValue("theTypeToConvert", myCVariable);
to return things back to my python program, this all works fine.
However, I have a custom C type:
extern HANDLE pascal
How do I pass an instance of this back to Python so I can give it to other C functions later? The closest I could think of was to use
Py_BuildValue("O&", etc)
but this apparently mangles the variable as I am not getting the correct results later on.
If I understand correctly that you want the object to be "opaque" from the Python perspective, i.e. just a pointer value that you can pass around in Python but not operate on the object it points to, then you might be after the Capsule object.
Official Python docs on capsules:
https://docs.python.org/2/c-api/capsule.html#capsules
See also:
Passing a C pointer around with the Python/C API
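If the capsule route fits, a minimal sketch (assuming HANDLE is a pointer-sized value; the producer and consumer function names below are made up for illustration):

static PyObject *
get_handle(PyObject *self, PyObject *args)
{
    HANDLE h = theCFunction();  /* hypothetical producer of the HANDLE */
    /* Python sees an opaque capsule; no destructor registered here. */
    return PyCapsule_New((void *)h, "mymodule.HANDLE", NULL);
}

static PyObject *
use_handle(PyObject *self, PyObject *args)
{
    PyObject *capsule;
    if (!PyArg_ParseTuple(args, "O", &capsule))
        return NULL;
    HANDLE h = (HANDLE)PyCapsule_GetPointer(capsule, "mymodule.HANDLE");
    if (!h)
        return NULL;            /* wrong capsule; exception already set */
    anotherCFunction(h);        /* hypothetical consumer */
    Py_RETURN_NONE;
}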
I am using the Python 2.7.2 C API with my C++ console application, and I have a question about Python C API Boolean objects.
I am using:
PyObject* myVariable = Py_True;
Do I need to decref myVariable with Py_DECREF(myVariable)?
The Python C API documentation says:
The Python True object. This object has no methods. It needs to be
treated just like any other object with respect to reference counts.
I searched the questions but could not find a clear answer for it.
Thanks.
Although it isn't dynamically created, it must be reference counted because PyObject variables can hold ANY Python object. Otherwise there would need to be checks for Py_True and other special cases scattered throughout the Python runtime as well as any C/C++ code that uses the API. That would be messy and error prone.
It needs to be treated just like any other object with respect to reference counts.
This means that you must incref it when you take a reference to it:
{
    Py_INCREF(Py_True);
    PyObject* myVariable = Py_True;
and you must decref it when you dispose of it.
    Py_DECREF(myVariable);
}
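As a concrete illustration of that rule, a C function that returns Py_True to its caller must hand over a new reference; the Py_RETURN_TRUE / Py_RETURN_FALSE convenience macros exist for exactly this. A sketch:

static PyObject *
is_positive(PyObject *self, PyObject *args)
{
    int n;
    if (!PyArg_ParseTuple(args, "i", &n))
        return NULL;
    if (n > 0) {
        Py_INCREF(Py_True);     /* the caller now owns this reference */
        return Py_True;
    }
    Py_RETURN_FALSE;            /* macro: increfs Py_False and returns it */
}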
In Python I can see what methods and fields an object has with:
print dir(my_object)
What's the equivalent of that in Groovy (assuming it has one)?
This looks particularly nice in Groovy (untested, taken from this link, so code credit should go there):
// Introspection, know all the details about classes :
// List all constructors of a class
String.constructors.each{println it}
// List all interfaces implemented by a class
String.interfaces.each{println it}
// List all methods offered by a class
String.methods.each{println it}
// Just list the methods names
String.methods.name
// Get the fields of an object (with their values)
d = new Date()
d.properties.each{println it}
The general term you are looking for is introspection.
As described here, to find all methods defined for String object:
"foo".metaClass.methods*.name.sort().unique()
It's not as simple as the Python version; perhaps somebody else can show a better way.
Besides just using the normal Java reflection API, there's:
http://docs.codehaus.org/display/GROOVY/JN3535-Reflection
You can also play games with the metaclasses.