On the second call of the following code, my app segfaults, so I guess I am missing something:
Py_Initialize();
pName = PyString_FromString("comp_macbeth");
pModule = PyImport_Import(pName);
Py_DECREF(pName);

if (pModule == NULL) {
    PyErr_Print();
    Py_Finalize();
    return;
}

pFunc = PyObject_GetAttrString(pModule, "compute");
/* pFunc is a new reference */
if (!pFunc || !PyCallable_Check(pFunc)) {
    PyErr_Print();
    Py_Finalize();
    return;
}

Py_Finalize();
comp_macbeth.py imports numpy. If I remove the numpy import, everything is fine. Is it a numpy bug, or am I missing something about imports?
From the Py_Finalize docs:
Some extensions may not work properly if their initialization routine is called more than once; this can happen if an application calls Py_Initialize() and Py_Finalize() more than once.
Apparently Numpy is one of those. See also this message from Numpy-discussion.
Calling Py_Initialize() only once, and cleaning up at exit, is the way to go. (And it should be faster, too!)
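A minimal sketch of that pattern, assuming a pair of helper functions (ensure_python and shutdown_python are illustrative names, not from the original code):

#include <Python.h>

/* Call before any embedded-Python work; initializes on the first call only. */
static void ensure_python(void)
{
    if (!Py_IsInitialized())
        Py_Initialize();
}

/* Call exactly once, when the whole application exits. */
static void shutdown_python(void)
{
    if (Py_IsInitialized())
        Py_Finalize();
}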
I have this in my module initialization code; the URL in the comment no longer exists, but in case it helps:
// http://numpy.scipy.org/numpydoc/numpy-13.html mentions this must be done in module init, otherwise we will crash
import_array();
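For an embedding host (as opposed to an extension module), one way to do this is sketched below; it assumes numpy's C headers are on the include path and uses _import_array(), the function the import_array() macro wraps:

#include <Python.h>
#include <numpy/arrayobject.h>

/* Run once, right after Py_Initialize(); returns -1 if numpy's C API
   could not be imported. */
static int init_numpy(void)
{
    if (_import_array() < 0) {
        PyErr_Print();
        return -1;
    }
    return 0;
}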
Firstly, my problem is similar to this one:
Debugging embedded Python
However, although the description in that case is very helpful, it unfortunately doesn't contain all the information needed to debug a mixed C++/embedded Python app.
Background: I have a C++ app that calls Python code with an argument:
bool
ui::runLUOvershoot4(PyObject* inData)
{
    // Get the python module; pMod is borrowed and must not be Py_DECREF-ed.
    PyObject* pModule = nullptr;
    PyObject* const pMod = PyImport_AddModule("lu_overshoot4");
    if (pMod != nullptr) {
        pModule = PyImport_ReloadModule(pMod);
    }
    if (pModule == nullptr) {
        PyErr_Print();
        return false;
    }
    // Get the dictionary for the lu_overshoot4 module (borrowed reference).
    PyObject* const pDict = PyModule_GetDict(pModule);
    Py_DECREF(pModule);
    // Get the function from the dictionary (borrowed reference).
    PyObject* const pFunc = PyDict_GetItemString(pDict, "LU_Overshoot4");
    if (pFunc == nullptr) {
        PyErr_Print();
        return false;
    }
    Py_INCREF(pFunc);
    if (PyCallable_Check(pFunc)) {
        // Call the python func with inData as its single argument.
        PyObject* const arglist = Py_BuildValue("(O)", inData);
        if (arglist == nullptr) {
            Py_DECREF(pFunc);
            PyErr_Print();
            return false;
        }
        PyObject* const pValue = PyObject_Call(pFunc, arglist, nullptr);
        Py_DECREF(arglist);
        Py_DECREF(pFunc);
        if (pValue == nullptr) {
            PyErr_Print();
            return false;
        }
        Py_DECREF(pValue);
        return true;
    }
    Py_DECREF(pFunc);
    return false;
}
All works as expected. But I want to be able to step in the debugger from the C++ PyObject_Call into the Python code. So far I have only been able to do this if I start the C++ code without the debugger and then attach the debugger to the running process. If I start in the debugger, the breakpoints in the Python code are disabled, with the tooltip 'The breakpoint will not currently be hit. No symbols have been loaded for this document'.
A little detail on the setup: this is a Visual Studio 2022 solution with a C++ project (with the debugger to launch set to the Python/Native Debugger) and a Python project containing the Python files. It's not clear to me what settings to give the Python project. What I have in the General tab is:
Startup File -
Working Directory - .
Windows Application - ticked (makes no difference though)
Interpreter - env(Python 3.8 64-bit).
In the Debug tab,
Search Paths - .;\env
Script args -
Interpreter Path - \env
Interpreter Args -
Environment Variables -
Debug - 'Enable native code debugging' checked.
The Python project's Python environment was set up with a virtual env pointing to the \env directory, which contains the include/Lib/Scripts/share dirs and python.exe, python38.dll, python.pdb etc., i.e. the Python installation and pdb files. This is how (I believe) 'Lambda' describes their setup.
One of the issues I faced was that at first VS2022 (v17.2.6) did not offer the option of setting the debugger to Python/Native; a fix is described in https://learn.microsoft.com/en-us/answers/questions/860222/the-option-34pythonnative-debugging34-is-missing-f.html. But with that set and the debugger started with F5, it is not possible to step into Python code from C++ or break on Python code; it only works when debugging by attaching to the running process.
Any help gratefully received.
I want to call a Python function from C++ code in an OMNeT++ simple module.
I debugged the code using gdb. It gets through all the lines fine, but then a segmentation fault occurs after Py_Finalize().
I found a GitHub issue that describes the same problem, but it did not help me resolve it.
double result = 0;

// 1) Initialise the python interpreter (only once)
if (!Py_IsInitialized()) {
    Py_Initialize();
    //Py_AtExit(Py_Finalize);
}

// 2) Initialise the python thread mechanism
if (!PyEval_ThreadsInitialized()) {
    PyEval_InitThreads();
    assert(PyEval_ThreadsInitialized());
}

PyGILState_STATE s = PyGILState_Ensure();
PyRun_SimpleString("import sys; sys.path.append('/home/mypath/')");

PyObject *pName = PyUnicode_DecodeFSDefault("integrationTest");
PyObject *pModule = PyImport_Import(pName);
if (pModule != NULL)
{
    PyObject *pFunction = PyObject_GetAttrString(pModule, "calculateExecutionTime");
    if (pFunction && PyCallable_Check(pFunction)) {
        /* PyTuple_Pack does not steal references, so release the args */
        PyObject *arg0 = PyFloat_FromDouble(2.0);
        PyObject *arg1 = PyFloat_FromDouble(8.0);
        PyObject *pArgs = PyTuple_Pack(2, arg0, arg1);
        Py_XDECREF(arg0);
        Py_XDECREF(arg1);
        PyObject *pResult = PyObject_CallObject(pFunction, pArgs);
        if (pResult != NULL)
            result = PyFloat_AsDouble(pResult);
        else
            PyErr_Print();
        Py_XDECREF(pResult);
        Py_XDECREF(pArgs);
    }
    Py_XDECREF(pFunction);
}

// Clean up (DECREF while still holding the GIL; pModule may be NULL)
Py_XDECREF(pModule);
Py_DECREF(pName);
PyGILState_Release(s);
Py_Finalize();
The problem occurs after the first initialization/uninitialization of the Python interpreter. What happens during the OMNeT++ simulation is initializing/uninitializing/re-initializing/... the Python interpreter, and Numpy doesn't support this.
So I resolved the problem by initializing the Python interpreter just once, at the beginning of the simulation, in the initialize() method, and calling Py_Finalize() in the destructor.
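A minimal sketch of that arrangement, assuming a simple module; PyCallerModule and its structure are illustrative, not taken from the actual simulation:

#include <omnetpp.h>
#include <Python.h>

class PyCallerModule : public omnetpp::cSimpleModule
{
  protected:
    virtual void initialize() override {
        // Initialize the interpreter exactly once, up front.
        if (!Py_IsInitialized())
            Py_Initialize();
    }
    virtual void handleMessage(omnetpp::cMessage *msg) override {
        // Call into Python from here as needed; never Py_Finalize() per call.
    }
  public:
    virtual ~PyCallerModule() {
        // Tear the interpreter down once, at the very end.
        if (Py_IsInitialized())
            Py_Finalize();
    }
};

Define_Module(PyCallerModule);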
Can someone please explain to me how I can pass a variable from embedded Python back to my C program?
I've looked everywhere on the web, and what I found I did not understand, because I know very little Python.
I tried to create a callback function in C, but I did not understand how it's supposed to work.
My main program is in C. In a thread, I create a Python object and call a Python function from a Python script. This function produces values, and these values I need to pass back to the C program for further use.
For embedding, I would recommend looking at the docs on the Python website: https://docs.python.org/3.4/extending/embedding.html#pure-embedding
The section that is of interest to you is:
if (pModule != NULL) {
    pFunc = PyObject_GetAttrString(pModule, argv[2]);
    /* pFunc is a new reference */

    if (pFunc && PyCallable_Check(pFunc)) {
        pArgs = PyTuple_New(argc - 3);
        for (i = 0; i < argc - 3; ++i) {
            pValue = PyLong_FromLong(atoi(argv[i + 3]));
            if (!pValue) {
                Py_DECREF(pArgs);
                Py_DECREF(pModule);
                fprintf(stderr, "Cannot convert argument\n");
                return 1;
            }
            /* pValue reference stolen here: */
            PyTuple_SetItem(pArgs, i, pValue);
        }
        pValue = PyObject_CallObject(pFunc, pArgs);
        Py_DECREF(pArgs);
        if (pValue != NULL) {
            printf("Result of call: %ld\n", PyLong_AsLong(pValue));
            Py_DECREF(pValue);
        }
        else {
            Py_DECREF(pFunc);
            Py_DECREF(pModule);
            PyErr_Print();
            fprintf(stderr, "Call failed\n");
            return 1;
        }
    }
    else {
        if (PyErr_Occurred())
            PyErr_Print();
        fprintf(stderr, "Cannot find function \"%s\"\n", argv[2]);
    }
    Py_XDECREF(pFunc);
    Py_DECREF(pModule);
}
specifically this line:
pValue = PyObject_CallObject(pFunc, pArgs);
This calls a Python function (a callable Python object), pFunc, with the arguments pArgs, and returns a Python object, pValue.
I would suggest reading through that entire page to get a better understanding of embedding Python. Also, since you say you know very little Python, I would suggest getting more familiar with the language and how different it is from C/C++. You'll need to know more about how Python works before you can be effective at embedding it.
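To get the returned value into a C variable, convert pValue with the matching conversion function. A small sketch, continuing the docs example above and assuming the Python function returns a number:

pValue = PyObject_CallObject(pFunc, pArgs);
if (pValue != NULL) {
    double result = PyFloat_AsDouble(pValue);   /* or PyLong_AsLong for ints */
    if (PyErr_Occurred()) {
        PyErr_Print();                          /* the value was not numeric */
    } else {
        printf("Value from Python: %f\n", result);
    }
    Py_DECREF(pValue);
}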
Edit:
If you need to share memory between your C/C++ code and Python code (running in a separate thread), I don't believe you can share memory directly, at least not in the way you normally would with just C/C++. However, you can create a memory-mapped file to achieve the same effect with about the same performance. Doing so is platform dependent, and I don't have any experience with it, but here is a link that should help: http://www.codeproject.com/Articles/11843/Embedding-Python-in-C-C-Part-II.
Basically, you create the mmap in C (this is the platform-dependent part) and an mmap of the same file in your Python code, then write/read to/from the file in each of your threads.
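A rough POSIX-flavoured sketch of the C side (the linked article shows the Windows API version); the size constant and open_shared name are illustrative, and the Python side would map the same file with the standard mmap module:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

#define SHARED_SIZE 4096

/* Map a file that both the C side and the Python side can open; returns the
   shared buffer, or NULL on failure. */
char *open_shared(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, SHARED_SIZE) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, SHARED_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd); /* the mapping stays valid after the descriptor is closed */
    return (p == MAP_FAILED) ? NULL : (char *)p;
}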
I am trying to use Python in C++ and have the following code. I intended to take a user-input path and sys.path.append it. It looks like the call to PyRun_SimpleString caused some sort of spillage into a private class variable. How did this happen? I have tried various buffer sizes (50, 150, 200), and it did not change the output.
class Myclass
{
    ...
private:
    char *_modName;
    char *_modDir;
};

Myclass::Myclass()
{
    Py_Initialize();
    PyRun_SimpleString("import sys");
    PyRun_SimpleString("sys.path.append('/home/userA/Python')");
}

void Myclass::init()
{
    // This function is called before Myclass::test().
    // A couple of other python functions are called here, as listed below:
    // PyString_FromString, PyImport_Import, PyObject_GetAttrString, PyTuple_New,
    // PyTuple_SetItem, PyObject_CallObject, PyDict_GetItemString
}

void Myclass::test()
{
    char buffer[150];
    const char *strP1 = "sys.path.append('";
    const char *strP2 = "')";
    strcpy(buffer, strP1);
    strcat(buffer, _modDir);
    strcat(buffer, strP2);
    printf("Before %s\n", _modDir);
    printf("Before %s\n", _modName);
    PyRun_SimpleString(buffer);
    printf("After %s\n", _modName);
}
Here is the output. FYI I'm using a, b, c, d, f for illustration purposes only. It almost feels like PyRun_SimpleString(buffer) sticks the end of buffer into _modName.
Before /aaa/bbb/ccc/ddd
Before ffffff
After cc/ddd'
Thanks to Klamer Schutte for pointing in the right direction.
The Py_DECREF in my code was the culprit; I was unfamiliar with how reference counting works. The Py_DECREF call released pValue, and with it the contents pointed to by _modName. I guess a more beginner question would be: should I have added Py_INCREF(pValue) after the _modName assignment?
_modName = PyString_AsString(PyDict_GetItemString(pValue, "modName"));
Py_DECREF(pValue);
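Two safe alternatives, sketched as a continuation of the code above and assuming Python 2's PyString_* API as in the post (modNameObj is an illustrative name):

/* Option 1: copy the buffer out before releasing pValue. */
PyObject *modNameObj = PyDict_GetItemString(pValue, "modName"); /* borrowed */
_modName = strdup(PyString_AsString(modNameObj)); /* our own copy; free() later */
Py_DECREF(pValue);

/* Option 2: keep pValue alive for as long as _modName is used. The dict item
   is borrowed from pValue itself, so no Py_INCREF is needed -- simply delay
   the Py_DECREF(pValue) until _modName is no longer needed. */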