Cython equivalent of a C #define
#define myfunc(Node x,...) SetNode(x.getattributeNode(),__VA_ARGS__)
I have a C API function SetNode whose first argument is a node of struct type node, followed by a variable number of arguments (from 0 to N).
Here is a C example that solves such a problem:
exampleAPI.c
#include <stdarg.h>

float sumN(int len, ...) {
    va_list argp;
    int i;
    float s = 0;
    va_start(argp, len);
    for (i = 0; i < len; i++) {
        s += va_arg(argp, int);
    }
    va_end(argp);
    return s;
}
exampleAPI.h
#include <stdarg.h>

float sumN(int len, ...);
examplecode.c
#include <stdarg.h>
#include "exampleAPI.h"

int len(float first, ...) {
    va_list argp;
    int i = 1;
    va_start(argp, first);
    while (1) {
        if (va_arg(argp, float) == NULL) {
            return i;
        }
        else {
            i++;
        }
    }
    va_end(argp);
}
#define sum(...) sumN(len(__VA_ARGS__),__VA_ARGS__)
Now calling
sum(1,2,3,4);
will return 10.000000
sum(1.5,6.5);
will return 8.00000
I need a Cython alternative for the C definition below, not for the example above, because I have a C API with a SetNode function that takes a variable number of arguments, and I want to wrap it in Cython and call it from Python.
#define myfunc(Node x,...) SetNode(x.getattributeNode(),__VA_ARGS__)
Here Node is a class defined in Cython which holds a C struct as an attribute, and getattributeNode() is a method of the Node class which returns the C struct that needs to be passed to the C API.
cdef extern "Network.h":
ctypedef struct node_bn:
pass
node_bn* SetNode(node_bn* node,...)
cdef class Node:
cdef node_bn *node
cdef getattributeNode(self):
return self.node
def setNode(self,*arg):
self.node=SetNode(self.node,*arg) # Error cannot convert python objects to c type
An alternative I tried:
cdef extern from "stdarg.h":
ctypedef struct va_list:
pass
ctypedef struct fake_type:
pass
void va_start(va_list, void* arg)
void* va_arg(va_list, fake_type)
void va_end(va_list)
fake_type int_type "int"
cdef extern "Network.h":
ctypedef struct node_bn:
pass
node_bn* VSetNode(node_bn* node,va_list argp)
cdef class Node:
cdef node_bn *node
cdef getattributeNode(self):
return self.node
cpdef _setNode(self,node_bn *node,...):
cdef va_list agrp
va_start(va_list, node)
self.node=VSetNode(node,argp)
va_end(va_list)
def setNode(self,*arg):
self._setNode(self.node,*arg)
It works fine when the argument list is empty:
n = Node()
n.setNode()          # this works
n.setNode("top", 1)  # error: takes exactly one argument (3 given), raised at self._setNode(self.node, *arg)
If anyone could suggest a Cython equivalent for this, it would be great.
I don't think it's easily done through Cython (the problem is telling Cython what type conversions to do for an arbitrary number of arguments). The best I can suggest is to use the standard library ctypes module for this specific case and wrap the rest in Cython.
For the sake of an example, I've used a very simple sum function. va_sum.h contains:
typedef struct { double val; } node_bn;
node_bn* sum_va(node_bn* node,int len, ...);
/* on windows this must be:
__declspec(dllexport) node_bn* sum_va(node_bn* node,int len, ...);
*/
and va_sum.c contains:
#include <stdarg.h>
#include "va_sum.h"
node_bn* sum_va(node_bn* node, int len, ...) {
    int i;
    va_list vl;
    va_start(vl, len);
    for (i = 0; i < len; ++i) {
        node->val += va_arg(vl, double);
    }
    va_end(vl);
    return node;
}
I've written it so it adds everything to a field in a structure just to demonstrate that you can pass pointers to structures without too much trouble.
The Cython file is:
# definition of a structure
cdef extern from "va_sum.h":
    ctypedef struct node_bn:
        double val

# obviously you'll want to wrap things in terms of Python-accessible classes,
# but this at least demonstrates how it works
def test_sum(*args):
    cdef node_bn input_node
    cdef node_bn* output_node_p

    input_node.val = 5.0  # create a node, and set an initial value

    from ctypes import CDLL, c_double, c_void_p
    import os.path

    # load the Cython library with ctypes to gain access to the "sum_va" function,
    # assuming you've linked it in when you build the Cython module
    full_path = os.path.realpath(__file__)
    this_file_library = CDLL(full_path)

    # I treat all my arguments as doubles - you may need to do
    # something more sophisticated, but the idea is the same:
    # convert them to the C type the function is expecting
    args = [c_double(arg) for arg in args]

    sum_va = this_file_library.sum_va
    sum_va.restype = c_void_p  # it returns a pointer

    # pass the pointers as a void pointer
    # (my C compiler warns me if I used int instead of long,
    # but which integer type you have to use is likely system dependent
    # and somewhere you have to be careful)
    output_node_p_as_integer = sum_va(c_void_p(<long>&input_node), len(args),
                                      *args)

    # unfortunately getting the output needs a two-stage typecast:
    # first to a long, then to a pointer
    output_node_p = <node_bn*>(<long>(output_node_p_as_integer))
    return output_node_p.val
You need to compile your va_sum.c together with your Cython file (e.g. by adding sources = ['cython_file.pyx','va_sum.c'] in setup.py)
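For reference, a minimal setup.py along those lines might look like this (the module and file names are assumptions, not taken from the original post):
# hypothetical setup.py for the example above
from setuptools import setup, Extension
from Cython.Build import cythonize

ext = Extension("cython_file",                            # assumed module name
                sources=["cython_file.pyx", "va_sum.c"])  # compile and link va_sum.c in as well

setup(ext_modules=cythonize([ext]))
It is then built as usual, for example with python setup.py build_ext --inplace.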
Ctypes is probably a bit slower than Cython (I think there's a reasonable overhead on each call), and it's odd to mix them, but this should at least let you write the main wrapper in Cython, and use ctypes to get round the specific limitation.
This is probably not the proper answer, since I am not sure I understand the question fully. I would have replied in a comment, but the code formatting is too poor.
In Python the functions sum and len are available:
def my_len(*args):
    return len(args)

def my_sum(*args):
    return sum(args)

print "len =", my_len("hello", 123, "there")
print "sum =", my_sum(6.5, 1.5, 2.0)
outputs:
len = 3
sum = 10.0
Related
I only recently started using cppyy and ctypes, so this may be a bit of a silly question. I have the following C++ function:
float method(const char* args[]) {
    ...
}
and from Python I want to pass args as a list of strings, i.e.:
args = *magic*
x = cppyy.gbl.method(args)
I have previously found this, so I used
def setParameters(strParamList):
    numParams = len(strParamList)
    strArrayType = ct.c_char_p * numParams
    strArray = strArrayType()
    for i, param in enumerate(strParamList):
        strArray[i] = param
    lib.SetParams(numParams, strArray)
and from Python:
args = setParameters([b'hello', b'world'])
ctypes.c_char_p expects a bytes array. However, when calling x = cppyy.gbl.method(args) I get
TypeError: could not convert argument 1 (could not convert argument to buffer or nullptr)
I'm not entirely sure why this would be wrong, since args is a <__main__.c_char_p_Array_2> object, which I believe should be converted to a const char* args[].
For the sake of having a concrete example, I'll use this as the .cpp file:
#include <cstdlib>

extern "C"
float method(const char* args[]) {
    float sum = 0.0f;
    const char **p = args;
    while (*p) {
        sum += std::atof(*p++);
    }
    return sum;
}
And I'll assume it was compiled with g++ method.cpp -fPIC -shared -o method.so. Given those assumptions, here's an example of how you could use it from Python:
#!/usr/bin/env python3
from ctypes import *
lib = CDLL("./method.so")
lib.method.restype = c_float
lib.method.argtypes = (POINTER(c_char_p),)
def method(args):
    return lib.method((c_char_p * (len(args) + 1))(*args))
print(method([b'1.23', b'45.6']))
We make a C array to hold the Python arguments. len(args) + 1 makes sure there's room for the null pointer sentinel.
ctypes does not have a public API that is usable from C/C++ for extension writers, so the handling of ctypes by cppyy is by necessity somewhat clunky. What's going wrong is that the generated ctypes array of const char* is of type const char*[2], not const char*[], and since cppyy does a direct type match for ctypes types, that fails.
As-is, some code somewhere needs to do a conversion of the Python strings to low-level C ones, and hold on to that memory for the duration of the call. Me, personally, I'd use a little C++ wrapper, rather than having to think things through on the Python side. The point being that an std::vector<std::string> can deal with the necessary conversions (so no bytes type needed, for example, but of course allowed if you want to) and it can hold the temporary memory.
So, if you're given some 3rd party interface like this (putting it inline for cppyy only for the sake of the example):
import cppyy
cppyy.cppdef("""
float method(const char* args[], int len) {
for (int i = 0; i < len; ++i)
std::cerr << args[i] << " ";
std::cerr << std::endl;
return 42.f;
}
""")
Then I'd generate a wrapper:
# write a C++ wrapper to hide C code
cppyy.cppdef("""
namespace MyCppAPI {
    float method(const std::vector<std::string>& args) {
        std::vector<const char*> v;
        v.reserve(args.size());
        for (auto& s : args) v.push_back(s.c_str());
        return ::method(v.data(), v.size());
    }
}
""")
Then replace the original C API with the C++ version:
# replace C version with C++ one for all Python users
cppyy.gbl.method = cppyy.gbl.MyCppAPI.method
and things will be as expected for any other person downstream:
# now use it as expected
cppyy.gbl.method(["aap", "noot", "mies"])
All that said, obviously there is no reason why cppyy couldn't do this bit of wrapping automatically. I created this issue: https://bitbucket.org/wlav/cppyy/issues/235/automatically-convert-python-tuple-of
I wrote a C application which uses some Python code, wrapped in Cython, to simplify some things. What I want to do is to return an array from a Python function that is callable from C.
main.c
PyImport_AppendInittab("wrapper", PyInit_libwrapper);
Py_Initialize();
PyObject *module = PyImport_ImportModule("wrapper");
char *names[] = get_names();
Py_Finalize();
wrapper.pyx
cdef public void get_names():
    names = []
    names.append('ABCD')
    names.append('1234')
    names.append('abcd')
    return names
    return names_size
Is it possible to return an array of char* in that case?
To handle the array in C I also need to have its size, but I can't pass an int size by reference.
What's the best way to handle this?
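One possible approach (a hedged sketch; the names and the malloc-based memory handling are my assumptions, not from the original post) is to return a malloc'ed char** and report the length through a pointer out-parameter, so the C caller can simply pass &size:
# wrapper.pyx -- sketch only
from libc.stdlib cimport malloc
from libc.string cimport strcpy

cdef public char** get_names(int* size):
    names = [b'ABCD', b'1234', b'abcd']
    cdef char** out = <char**>malloc(len(names) * sizeof(char*))
    cdef bytes name
    for i in range(len(names)):
        name = names[i]
        out[i] = <char*>malloc(len(name) + 1)
        strcpy(out[i], name)              # copy, so the caller owns the memory
    size[0] = len(names)
    return out
On the C side this would be called as int n; char **names = get_names(&n);, and the caller is responsible for freeing each string and the array itself.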
I am wrapping a C library with Cython, and for now I do not know how to work around passing an address to a C function from Python. Details below.
I have a C function that takes the address of a previously defined C variable and changes its value:
void c_func(int* x) {
    *x = 5;
}
As a C user I can use this function in the following way:
int var1;
int var2;
c_func(&var1);
c_func(&var2);
After execution both var1 and var2 will be equal to 5. Now I want to wrap c_func with Cython. I'd like to have a py_func that can be imported from a wrapper package and used, but I do not know how to define C variables from Python.
What I have already done (in Jupyter):
%load_ext cython
%%cython
cdef int some_int

cdef extern from *:
    """
    void c_func(int* x) {
        *x=5;
    }
    """
    void c_func(int* x)

c_func(&some_int)
print(some_int)
What I want to get:
%%cython
# This part should be in a separate pyx file
cdef extern from *:
    """
    void c_func(int* x) {
        *x=5;
    }
    """
    void c_func(int* x)

def py_func(var):
    c_func(&var)

# The following part is the user API
from wrapper import py_func

var_to_pass_in_py_func = ...  # somehow defined C variable
py_func(var_to_pass_in_py_func)
print(var_to_pass_in_py_func)  # should print 5
var_to_pass_in_py_func might not be converted to python object, but C functions wrapped with python should not conflict with it.
Is it possible?
I have no idea how your example makes sense in practice, but one possible way is to pass a buffer, which is managed by Python, to the C function. For example:
%%cython -a -f
# Suppose this is the external C function that needs to be wrapped
cdef void c_func(int* value_to_change):
    value_to_change[0] = 123

def py_func(int[:] buffer_to_change):
    c_func(&buffer_to_change[0])

from array import array
from ctypes import *

a = array('i', [0])
py_func(a)
print(a[0])

b = (c_int*1)()  # c_int array with length 1
py_func(b)
print(b[0])
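A numpy array should work as the buffer as well, since it also exposes the buffer protocol (a small usage sketch, assuming the cell above has been compiled; np.intc matches a C int):
import numpy as np

c = np.zeros(1, dtype=np.intc)   # one-element buffer with C int dtype
py_func(c)
print(c[0])                      # prints 123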
I have a C function which signature looks like this:
typedef double (*func_t)(double*, int);
int some_f(func_t myFunc);
I would like to pass a Python function (not necessarily explicitly) as an argument to some_f. Unfortunately, I can't afford to alter the declaration of some_f; that is, I shouldn't change the C code.
One obvious thing I tried to do is to create a basic wrapping function like this:
cdef double wrapping_f(double *d, int i /*?, object f */):
    # do stuff
    return <double>f(d_t)
However, I can't come up with a way to actually "put" it inside wrapping_f's body.
There is a very bad solution to this problem: I could use a global object variable, but this forces me to copy-and-paste multiple instances of essentially the same wrapper function, each using a different global function (I am planning to use multiple Python functions simultaneously).
I keep my other answer for historical reasons - it shows that there is no way to do what you want without jit compilation, and it helped me to understand how good @DavidW's advice in this answer was.
For the sake of simplicity, I use a slightly simpler signature of the functions and trust you to adapt it to your needs.
Here is a blueprint for a closure, which lets ctypes do the jit-compilation behind the scenes:
%%cython
# needs Cython > 0.28 to run because of verbatim C-code

cdef extern from *:  # fill some_t with life
    """
    typedef int (*func_t)(int);
    static int some_f(func_t fun){
        return fun(42);
    }
    """
    ctypedef int (*func_t)(int)
    int some_f(func_t myFunc)

# works with any recent Cython version:
import ctypes

cdef class Closure:
    cdef object python_fun
    cdef object jitted_wrapper

    def inner_fun(self, int arg):
        return self.python_fun(arg)

    def __cinit__(self, python_fun):
        self.python_fun = python_fun
        ftype = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int)  # define signature
        self.jitted_wrapper = ftype(self.inner_fun)           # jit the wrapper

    cdef func_t get_fun_ptr(self):
        return (<func_t *><size_t>ctypes.addressof(self.jitted_wrapper))[0]

def use_closure(Closure closure):
    print(some_f(closure.get_fun_ptr()))
And now using it:
>>> cl1, cl2=Closure(lambda x:2*x), Closure(lambda x:3*x)
>>> use_closure(cl1)
84
>>> use_closure(cl2)
126
This answer is more in do-it-yourself style, and while it is not uninteresting, you should refer to my other answer for a concise recipe.
This answer is a hack and a little bit over the top; it only works on 64-bit Linux and probably should not be recommended - yet I just cannot stop myself from posting it.
There are actually four versions:
1. how easy life could be if the API took the possibility of closures into consideration
2. using a global state to produce a single closure [also considered by you]
3. using multiple global states to produce multiple closures at the same time [also considered by you]
4. using jit-compiled functions to produce an arbitrary number of closures at the same time
For the sake of simplicity I chose a simpler signature of func_t - int (*func_t)(void).
I know, you cannot change the API. Yet I cannot embark on a journey full of pain without mentioning how simple it could be... There is a quite common trick to fake closures with function pointers - just add an additional parameter to your API (normally a void *), i.e.:
#version 1: Life could be so easy
# needs Cython >= 0.28 because of verbatim C-code feature
%%cython
cdef extern from *:  # fill some_t with life
    """
    typedef int (*func_t)(void *);
    static int some_f(func_t fun, void *params){
        return fun(params);
    }
    """
    ctypedef int (*func_t)(void *)
    int some_f(func_t myFunc, void *params)

cdef int fun(void *obj):
    print(<object>obj)
    return len(<object>obj)

def doit(s):
    cdef void *params = <void*>s
    print(some_f(&fun, params))
We basically use void *params to pass the inner state of the closure to fun and so the result of fun can depend on this state.
The behavior is as expected:
>>> doit('A')
A
1
But alas, the API is how it is. We could use a global pointer and a wrapper to pass the information:
#version 2: Use global variable for information exchange
# needs Cython >= 0.28 because of verbatim C-code feature
%%cython
cdef extern from *:
    """
    typedef int (*func_t)();
    static int some_f(func_t fun){
        return fun();
    }
    static void *obj_a=NULL;
    """
    ctypedef int (*func_t)()
    int some_f(func_t myFunc)
    void *obj_a

cdef int fun(void *obj):
    print(<object>obj)
    return len(<object>obj)

cdef int wrap_fun():
    global obj_a
    return fun(obj_a)

cdef func_t create_fun(obj):
    global obj_a
    obj_a = <void *>obj
    return &wrap_fun

def doit(s):
    cdef func_t fun = create_fun(s)
    print(some_f(fun))
With the expected behavior:
>>> doit('A')
A
1
create_fun is just a convenience which sets the global object and returns the corresponding wrapper around the original function fun.
NB: It would be safer to make obj_a a Python object, because the void * could become dangling - but to keep the code closer to versions 1 and 4 we use void * instead of object.
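A minimal sketch of that safer variant (my assumption of what it would look like, reusing some_f and func_t from version 2):
cdef object obj_a = None          # module-level Python reference, kept alive by the module

cdef int fun_py(obj):
    print(obj)
    return len(obj)

cdef int wrap_fun():
    return fun_py(obj_a)

cdef func_t create_fun(obj):
    global obj_a
    obj_a = obj                   # no raw pointer, so nothing can dangle
    return &wrap_fun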
But what if there is more than one closure in use at the same time, let's say two? Obviously, with the approach above, we need two global objects and two wrapper functions to achieve our goal:
#version 3: two function pointers at the same time
%%cython
cdef extern from *:
    """
    typedef int (*func_t)();
    static int some_f(func_t fun){
        return fun();
    }
    static void *obj_a=NULL;
    static void *obj_b=NULL;
    """
    ctypedef int (*func_t)()
    int some_f(func_t myFunc)
    void *obj_a
    void *obj_b

cdef int fun(void *obj):
    print(<object>obj)
    return len(<object>obj)

cdef int wrap_fun_a():
    global obj_a
    return fun(obj_a)

cdef int wrap_fun_b():
    global obj_b
    return fun(obj_b)

cdef func_t create_fun(obj) except NULL:
    global obj_a, obj_b
    if obj_a == NULL:
        obj_a = <void *>obj
        return &wrap_fun_a
    if obj_b == NULL:
        obj_b = <void *>obj
        return &wrap_fun_b
    raise Exception("Not enough slots")

cdef void delete_fun(func_t fun):
    global obj_a, obj_b
    if fun == &wrap_fun_a:
        obj_a = NULL
    if fun == &wrap_fun_b:
        obj_b = NULL

def doit(s):
    ss = s + s
    cdef func_t fun1 = create_fun(s)
    cdef func_t fun2 = create_fun(ss)
    print(some_f(fun2))
    print(some_f(fun1))
    delete_fun(fun1)
    delete_fun(fun2)
After compiling, as expected:
>>> doit('A')
AA
2
A
1
But what if we have to provide an arbitrary number of function-pointers at the same time?
The problem is that we need to create the wrapper functions at run time, because there is no way to know at compile time how many we will need, so the only thing I can think of is to jit-compile these wrapper functions when they are needed.
The wrapper function looks quite simple, here in assembler:
wrapper_fun:
    movq address_of_params, %rdi   ; void *param is the parameter of fun
    movq address_of_fun, %rax      ; address of the function which should be called
    jmp *%rax                      ; jmp instead of call because it is the last operation
The addresses of params and of fun will be known at run time, so we just have to link - replace the placeholder in the resulting machine code.
In my implementation I'm following more or less this great article: https://eli.thegreenplace.net/2017/adventures-in-jit-compilation-part-4-in-python/
#version 4: jit-compiled wrapper
%%cython
from libc.string cimport memcpy

cdef extern from *:
    """
    typedef int (*func_t)(void);
    static int some_f(func_t fun){
        return fun();
    }
    """
    ctypedef int (*func_t)()
    int some_f(func_t myFunc)

cdef extern from "sys/mman.h":
    void *mmap(void *addr, size_t length, int prot, int flags,
               int fd, size_t offset)
    int munmap(void *addr, size_t length)
    int PROT_READ      # #define PROT_READ  0x1   /* Page can be read. */
    int PROT_WRITE     # #define PROT_WRITE 0x2   /* Page can be written. */
    int PROT_EXEC      # #define PROT_EXEC  0x4   /* Page can be executed. */
    int MAP_PRIVATE    # #define MAP_PRIVATE   0x02  /* Changes are private. */
    int MAP_ANONYMOUS  # #define MAP_ANONYMOUS 0x20  /* Don't use a file. */

#              |-----8-byte-placeholder ---|
blue_print  = b'\x48\xbf\x00\x00\x00\x00\x00\x00\x00\x00'  # movabs 8-byte-placeholder,%rdi
blue_print += b'\x48\xb8\x00\x00\x00\x00\x00\x00\x00\x00'  # movabs 8-byte-placeholder,%rax
blue_print += b'\xff\xe0'                                  # jmpq *%rax ; jump to address in %rax

cdef func_t link(void *obj, void *fun_ptr) except NULL:
    cdef size_t N = len(blue_print)
    cdef char *mem = <char *>mmap(NULL, N,
                                  PROT_READ | PROT_WRITE | PROT_EXEC,
                                  MAP_PRIVATE | MAP_ANONYMOUS,
                                  -1, 0)
    if <long long int>mem == -1:
        raise OSError("failed to allocate mmap")

    # copy blueprint:
    memcpy(mem, <char *>blue_print, N)
    # inject object address:
    memcpy(mem+2, &obj, 8)
    # inject function address:
    memcpy(mem+2+8+2, &fun_ptr, 8)
    return <func_t>(mem)

cdef int fun(void *obj):
    print(<object>obj)
    return len(<object>obj)

cdef func_t create_fun(obj) except NULL:
    return link(<void *>obj, <void *>&fun)

cdef void delete_fun(func_t fun):
    munmap(fun, len(blue_print))

def doit(s):
    ss, sss = s+s, s+s+s
    cdef func_t fun1 = create_fun(s)
    cdef func_t fun2 = create_fun(ss)
    cdef func_t fun3 = create_fun(sss)
    print(some_f(fun2))
    print(some_f(fun1))
    print(some_f(fun3))
    delete_fun(fun1)
    delete_fun(fun2)
    delete_fun(fun3)
And now, the expected behavior:
>>> doit('A')
AA
2
A
1
AAA
3
After looking at this, maybe there is a chance the API can be changed?
(I think this question can easily be answered by an expert without an actual copy-paste working example, so I did not spend extra time on one…)
I have a C++ method, which returns an array of integers:
int* Narf::foo() {
    int bar[10];
    for (int i = 0; i < 10; i++) {
        bar[i] = i;
    }
    return bar;
}
I created the Cython stuff for its class:
cdef extern from "Narf" namespace "narf":
cdef cppclass Narf:
Narf() except +
int* foo()
And these are my Python wrappers:
cdef class PyNarf:
    cdef Narf c_narf

    def __cinit__(self):
        self.c_narf = Narf()

    def foo(self):
        return self.c_narf.foo()
The problem is the foo method, with its int* return type (other methods which I did not list in this example work perfectly fine!). It does not compile and gives the following error:
    def foo(self):
        return self.c_narf.foo()
                              ^
------------------------------------------------------------

narf.pyx:39:37: Cannot convert 'int *' to Python object
Of course, it obviously does not accept int * as return type. How do I solve this problem? Is there an easy way of wrapping this int * into a numpy array (I'd prefer numpy), or how is this supposed to be handled?
I'm also not sure how to handle the memory here, since I'm reading in large files etc.
To wrap it in a numpy array, you need to know its size; then you can do it like this:
import numpy as np

def foo(self):
    cdef int[::1] view = <int[:self.c_narf.size()]> self.c_narf.foo()
    return np.asarray(view)
The above code assumes that there exists a function self.c_narf.size() that returns the size of the array.
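If the C++ side does not keep the buffer alive (in the question's foo() the pointer refers to a local array, so it dangles as soon as the function returns), copying into a numpy array that owns its memory is safer; a sketch under the same size() assumption:
def foo_copy(self):
    cdef int n = self.c_narf.size()                  # assumed helper returning the length
    cdef int[::1] view = <int[:n]> self.c_narf.foo()
    return np.asarray(view).copy()                   # the copy owns its memory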
This looks like it can be solved using the solution to this question: Have pointer to data. Need Numpy array in Fortran order. Want to use Cython