Passing a variable from embedded Python to C

Can someone please explain how I can pass a variable from embedded Python back to my C program?
I've looked everywhere on the web, and what I found I did not understand, because I know very little Python.
I tried to create a callback function in C, but I did not understand how it's supposed to work.
My main program is in C. There I create a Python object in a thread and call a Python function from a Python script. This function produces values, and I need to pass these values back to the C program for further use.

For embedding, I would recommend looking at the docs on the Python website: https://docs.python.org/3.4/extending/embedding.html#pure-embedding
The section that is of interest to you is:
if (pModule != NULL) {
    pFunc = PyObject_GetAttrString(pModule, argv[2]);
    /* pFunc is a new reference */
    if (pFunc && PyCallable_Check(pFunc)) {
        pArgs = PyTuple_New(argc - 3);
        for (i = 0; i < argc - 3; ++i) {
            pValue = PyLong_FromLong(atoi(argv[i + 3]));
            if (!pValue) {
                Py_DECREF(pArgs);
                Py_DECREF(pModule);
                fprintf(stderr, "Cannot convert argument\n");
                return 1;
            }
            /* pValue reference stolen here: */
            PyTuple_SetItem(pArgs, i, pValue);
        }
        pValue = PyObject_CallObject(pFunc, pArgs);
        Py_DECREF(pArgs);
        if (pValue != NULL) {
            printf("Result of call: %ld\n", PyLong_AsLong(pValue));
            Py_DECREF(pValue);
        }
        else {
            Py_DECREF(pFunc);
            Py_DECREF(pModule);
            PyErr_Print();
            fprintf(stderr, "Call failed\n");
            return 1;
        }
    }
    else {
        if (PyErr_Occurred())
            PyErr_Print();
        fprintf(stderr, "Cannot find function \"%s\"\n", argv[2]);
    }
    Py_XDECREF(pFunc);
    Py_DECREF(pModule);
}
Specifically, this line:
pValue = PyObject_CallObject(pFunc, pArgs);
This calls a Python function (a callable Python object), pFunc, with the arguments pArgs, and returns a Python object, pValue. That returned object is your channel back to C: convert it with the C API function matching its type.
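For example, here is a minimal sketch of pulling the result back into plain C, picking up right after the call above. It assumes the Python function returns an int; for a float you would use PyFloat_AsDouble instead.

/* Sketch: convert the object returned by the Python call into a C value.
   Assumes the Python function returned a Python int. */
pValue = PyObject_CallObject(pFunc, pArgs);
if (pValue != NULL) {
    long result = PyLong_AsLong(pValue);   /* Python int -> C long */
    if (result == -1 && PyErr_Occurred()) {
        PyErr_Print();                     /* conversion failed */
    }
    else {
        /* 'result' is now an ordinary C variable for further use */
        printf("Got %ld back from Python\n", result);
    }
    Py_DECREF(pValue);                     /* done with the Python object */
}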
I would suggest reading through that entire page to get a better understanding of embedding Python. Also, since you say you know very little Python, I would suggest getting more familiar with the language and how different it is from C/C++. You'll need to know more about how Python works before you can be effective at embedding it.
Edit:
If you need to share memory between your C/C++ code and Python code (running in a separate thread), I don't believe you can share memory directly, at least not in the way you normally would in plain C/C++. However, you can create a memory-mapped file to achieve the same effect with about the same performance. Doing so is platform dependent, and I don't have any experience with it, but here is a link that should help: http://www.codeproject.com/Articles/11843/Embedding-Python-in-C-C-Part-II.
Basically, you create the mmap in C (this is the platform-dependent part) and an mmap of the same file in your Python code, then write/read to/from it in each of your threads.
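To make the idea concrete, here is a minimal POSIX sketch of the C side; the file name shared.bin and the 4096-byte size are arbitrary assumptions for illustration, and error handling is omitted:

/* Minimal POSIX sketch of sharing data through a memory-mapped file.
   Assumptions: file "shared.bin", a 4096-byte region, no error handling. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <string.h>

int main(void) {
    int fd = open("shared.bin", O_RDWR | O_CREAT, 0666);
    ftruncate(fd, 4096);                       /* size the backing file */
    char *shared = (char *)mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
    strcpy(shared, "hello from C");            /* visible to the Python side */
    /* ... meanwhile the Python thread maps the same file ... */
    munmap(shared, 4096);
    close(fd);
    return 0;
}

On the Python side, the standard mmap module can map the same file (mmap.mmap(f.fileno(), 4096)) and read or write the same bytes.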

Related

Bad access when calling PyObject_CallObject for the second time [duplicate]

On the second call of the following code, my app segfaults, so I guess I am missing something:
Py_Initialize();
pName = PyString_FromString("comp_macbeth");
pModule = PyImport_Import(pName);
Py_DECREF(pName);
if (pModule == NULL) {
    PyErr_Print();
    Py_Finalize();
    return;
}
pFunc = PyObject_GetAttrString(pModule, "compute");
/* pFunc is a new reference */
if (!pFunc || !PyCallable_Check(pFunc)) {
    PyErr_Print();
    Py_Finalize();
    return;
}
Py_Finalize();
comp_macbeth.py imports numpy. If I remove the numpy import, everything is fine. Is it a numpy bug, or am I missing something about imports?
From the Py_Finalize docs:
Some extensions may not work properly if their initialization routine is called more than once; this can happen if an application calls Py_Initialize() and Py_Finalize() more than once.
Apparently Numpy is one of those. See also this message from Numpy-discussion.
Calling Py_Initialize() only once, and cleaning up at exit, is the way to go. (And it should be faster, too!)
I have this in my module initialization part, but the URL does not exist anymore. In case it helps:
// http://numpy.scipy.org/numpydoc/numpy-13.html mentions this must be done in module init, otherwise we will crash
import_array();
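A minimal sketch of that pattern: initialize once, make as many calls as you like, finalize once at exit. (The PyRun_SimpleString line is just a stand-in for the PyImport_Import / PyObject_CallObject sequence from the question.)

/* Sketch: initialize the interpreter once per process, finalize once at exit. */
#include <Python.h>

int main(void) {
    Py_Initialize();                  /* once, at startup */
    for (int i = 0; i < 100; ++i) {
        /* stand-in for the import/call sequence shown in the question */
        PyRun_SimpleString("import comp_macbeth; comp_macbeth.compute()");
    }
    Py_Finalize();                    /* once, at exit; numpy is initialized only once */
    return 0;
}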

Python embed into C++

I have Python code embedded in C++.
Do I need to release memory (Py_XDECREF) for PyObject* pValue and PyObject* pArgs?
When I call Py_XDECREF(pArgs) and Py_XDECREF(pValue), I get a segmentation fault (core dumped).
I think the Python side is still using those variables while the C++ side tries to release the memory.
What is the best practice for this issue?
for (int i = 0; i < 100; i++) {
    .......do sth.......
    if (pModule != NULL) {
        std::string st = jps.updateZone(worldx_y, lenVect);
        PyObject* pValue = PyBytes_FromString(st.c_str());
        if (pFunc_insert && PyCallable_Check(pFunc_insert)) {
            PyObject *pArgs = PyTuple_New(1);
            PyTuple_SetItem(pArgs, 0, pValue);
            PyObject_CallObject(pFunc_insert, pArgs);
            Py_XDECREF(pArgs);
        }
        Py_XDECREF(pValue);
    }
    ......do sth.......
}
PyTuple_SetItem steals a reference to the item. You don't need to decref the item, because you no longer own a reference to it; you do need to decref the tuple. In the loop above, Py_XDECREF(pArgs) destroys the tuple and, with it, the stolen reference to pValue, so the later Py_XDECREF(pValue) releases a reference you no longer own; that is the source of the segfault.
If you still get segfaults after that, you have some other bug.
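A corrected version of the loop body might look like this (a sketch reusing the question's names; note that the result of PyObject_CallObject is a new reference and should be released as well):

/* Sketch of corrected reference handling, using the question's names. */
if (pModule != NULL) {
    std::string st = jps.updateZone(worldx_y, lenVect);
    PyObject* pValue = PyBytes_FromString(st.c_str());
    if (pValue && pFunc_insert && PyCallable_Check(pFunc_insert)) {
        PyObject* pArgs = PyTuple_New(1);
        PyTuple_SetItem(pArgs, 0, pValue);   /* tuple now owns pValue */
        PyObject* pResult = PyObject_CallObject(pFunc_insert, pArgs);
        Py_XDECREF(pResult);                 /* the call returned a new reference */
        Py_DECREF(pArgs);                    /* destroys the tuple and pValue */
    }
    else {
        Py_XDECREF(pValue);                  /* call never happened: still ours */
    }
    /* no extra Py_XDECREF(pValue) after the tuple is gone */
}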

Segmentation fault in Python code embedded in the C++ code of an OMNeT++ simple module

I want to call a Python function from C++ code in an OMNeT++ simple module.
I debugged the code using gdb; it runs through all the lines fine, but at the end a segmentation fault occurs after Py_Finalize().
I found a GitHub issue that describes the same problem, but it did not help me resolve it.
double result = 0;
// 1) Initialise the Python interpreter
if (!Py_IsInitialized()) {
    Py_Initialize();
    //Py_AtExit(Py_Finalize);
}
// 2) Initialise the Python thread mechanism
if (!PyEval_ThreadsInitialized()) {
    PyEval_InitThreads();
    assert(PyEval_ThreadsInitialized());
}
PyGILState_STATE s = PyGILState_Ensure();
PyRun_SimpleString("import sys; sys.path.append('/home/mypath/')");
PyObject *pName = PyUnicode_DecodeFSDefault((char*)"integrationTest");
PyObject* pModule = PyImport_Import(pName);
if (pModule != NULL)
{
    PyObject* pFunction = PyObject_GetAttrString(pModule, (char*)"calculateExecutionTime");
    /// the arguments and the function result are handled at this level
    PyObject* pArgs = PyTuple_Pack(2, PyFloat_FromDouble(2.0), PyFloat_FromDouble(8.0));
    PyObject* pResult = PyObject_CallObject(pFunction, pArgs);
    result = (double)PyFloat_AsDouble(pResult);
    ///////
}
// Clean up
PyGILState_Release(s);
Py_DECREF(pName);
Py_DECREF(pModule);
Py_Finalize();
The problem occurs after the first initialize/finalize cycle of the Python interpreter. What happens during the OMNeT++ simulation is that the Python interpreter gets initialized, finalized, re-initialized, and so on. Numpy, however, doesn't support this.
So I resolved the problem by initializing the Python interpreter just once, at the beginning of the simulation in the initialize() method, and calling Py_Finalize() in the destructor.
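A minimal sketch of that arrangement (MyModule is a placeholder name, not the poster's actual module; the Python call itself would go in handleMessage()):

// Sketch: the interpreter lives for the whole simulation in an OMNeT++ module.
#include <omnetpp.h>
#include <Python.h>

class MyModule : public omnetpp::cSimpleModule {
  protected:
    virtual void initialize() override {
        if (!Py_IsInitialized())
            Py_Initialize();            // once, at simulation start
    }
    virtual void handleMessage(omnetpp::cMessage *msg) override {
        // ... PyImport_Import / PyObject_CallObject as in the question ...
        delete msg;
    }
  public:
    virtual ~MyModule() {
        if (Py_IsInitialized())
            Py_Finalize();              // once, when the module is destroyed
    }
};

Define_Module(MyModule);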

Memory deallocation from SWIG typemap

I am trying to fix a memory leak in a Python wrapper for a C++ DLL.
The problem occurs when assigning a byte buffer to a helper object that has been created in Python:
struct ByteBuffer
{
    int length;
    uint8_t * dataBuf;
};
I want to supply dataBuf as a Python sequence, so the typemap that I came up with (and which works) is this:
%module(directors="1") mymodule
%typemap(in) uint8_t * (uint8_t *temp) {
    int length = PySequence_Length($input);
    temp = new uint8_t[length]; // memory allocated here. How to free?
    for (int i = 0; i < length; i++) {
        PyObject *o = PySequence_GetItem($input, i);
        if (PyNumber_Check(o)) {
            temp[i] = (uint8_t) PyLong_AsLong(o);
            //cout << (int)temp[i] << endl;
        } else {
            PyErr_SetString(PyExc_ValueError, "Sequence elements must be uint8_t");
            return NULL;
        }
    }
    $1 = temp;
}
The problem is that the typemap allocates memory for a new C array each time, and this memory is never freed inside the DLL. In other words, the DLL expects the user to manage the memory of the ByteBuffer's dataBuf. For example, when creating 10000 such objects sequentially in Python and then deleting them, the memory usage rises steadily (a leak):
for i in range(10000):
    byteBuffer = mymodule.ByteBuffer()
    byteBuffer.length = 10000
    byteBuffer.dataBuf = [0]*10000
    # ... use byteBuffer
    del byteBuffer
Is there a way to delete the allocated dataBuf from Python? Thank you for your patience!
Edit: I am not posting the whole working code, to keep it short; if required, I will. I am using Python 3.5 x64 and SWIG version 3.0.7.
It was far simpler than I thought. I just added this to the .i file:
%typemap(freearg) uint8_t * {
    //cout << "Freeing uint8_t*!!! " << endl;
    if ($1) delete[]($1);
}
It seems to work.
Edit: switched free to delete[], since the buffer was allocated with new[].
