How do I resolve cppyy load_library giving a RuntimeError? - python

Following the answer I found to the question titled "Calling C/C++ from Python?", and also the cppyy documentation website, I made some sample classes in .h and .cpp files and tried to include them in Python. While the .h file gets included easily, when I try to use the cppyy.load_library() function it gives me a runtime error. Can someone please help? I've tried to look for solutions online, but apparently no one has had a similar problem before. This is what I'm running in Jupyter Notebook:
import cppyy
cppyy.include("foo.h")
cppyy.load_library("libfoo")
The final line gives me the following error:
RuntimeError Traceback (most recent call last)
<ipython-input-3-eea6173ad08e> in <module>
----> 1 cppyy.load_library("libfoo")
~\anaconda3\lib\site-packages\cppyy\__init__.py in load_library(name)
219 sc = gSystem.Load(name)
220 if sc == -1:
--> 221 raise RuntimeError('Unable to load library "%s"%s' % (name, err.err))
222 return True
223
RuntimeError: Unable to load library "libfoo"
This is my .h file:
class Foo {
public:
    void bar();
};
And here is my .cpp file:
#include "foo.h"
#include <iostream>
void Foo::bar() { std::cout << "Hello" << std::endl; }
I'm using the commands g++ -c -fPIC foo.cpp -o foo.o and g++ -shared -Wl,-soname,libfoo.so -o libfoo.so foo.o to compile my cpp code.
Please can someone help?

The example works just fine for me.
To debug, first make sure to start Python in the same directory where libfoo.so is located, or add the directory where libfoo.so lives to LD_LIBRARY_PATH (for any process to use), or call cppyy.add_library_path() with the path as argument to add it for use by cppyy only. As concerns the name, the .so extension is added automatically as needed, as is the lib prefix, so any one of foo, libfoo, foo.so, or libfoo.so is fine.
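For example, a minimal sketch (the directory below is a placeholder for wherever your libfoo.so actually lives):
import cppyy
cppyy.add_library_path("/path/to/dir/containing/libfoo")  # placeholder path
cppyy.include("foo.h")
cppyy.load_library("foo")  # "foo", "libfoo", "foo.so", or "libfoo.so" all work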
If that still fails, a reasonable way (on Linux only) of getting more information about what could be going wrong is to use ctypes:
$ python
>>> import ctypes
>>> lib = ctypes.CDLL("libfoo.so")
which will show you whether there are other problems, such as missing symbols or missing dependent libraries (but neither is the case here).
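Once the load succeeds, the class is reachable from cppyy.gbl; a quick smoke test (assuming the foo.h/foo.cpp from the question) looks like this:
>>> from cppyy.gbl import Foo
>>> f = Foo()
>>> f.bar()   # should print "Hello"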

Related

Can't run Python file in C

I have a problem with the Python API in C. I am trying to run a Python script with PyRun_SimpleFile, but it fails.
I get this error: d:/gcc/bin/../lib/gcc/x86_64-w64-mingw32/11.1.0/../../../../x86_64-w64-mingw32/bin/ld.exe: C:\Users\aggel\AppData\Local\Temp\ccRzYgwa.o:pyboot.c:(.text+0x47): undefined reference to `__imp_PyRun_SimpleFileExFlags'
collect2.exe: error: ld returned 1 exit status
The code:
#define PY_SSIZE_T_CLEAN
#include <stdio.h>
#include <conio.h>
#include "Python.h"
#include "fileapi.h"
#include "fileobject.h"
int main() {
    PyObject* pInt;
    FILE *file = fopen("test.py", "r+");
    PyRun_SimpleFile(file, "test.py");
    return 0;
}
undefined reference to __imp_PyRun_SimpleFileExFlags
This means that the function PyRun_SimpleFileExFlags is declared (possibly in the file Python.h), but not defined. The definition lives in the compiled Python library. The name of that library is something like libpython.so (for dynamic libraries) or libpython.a (for static libraries).
You need to link your program with the python library.
With gcc, you can use the -l flag; list the library after your source files. Something like
gcc prog.c -lpython3
The name "python3" may vary; if the library name starts with "lib", you need not write the lib prefix in the -l flag.
However, you might need to pass the directory containing the library explicitly if this does not work. You can pass that directory with the -L flag.
gcc prog.c -L/directory/containing/libpython -lpython3
Only after proper linking will you be able to use the functionalities of Python API.
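If you are unsure which directory and library name to pass, one way (on Linux/macOS; a hedged sketch using the standard sysconfig module) is to ask the target interpreter itself:
import sysconfig
print(sysconfig.get_config_var("LIBDIR"))     # directory to pass with -L
print(sysconfig.get_config_var("LDLIBRARY"))  # e.g. libpython3.10.so, so pass -lpython3.10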
SOLVED (at least for me): add
-Wl,<path to your Python installation>\libs\python<version>.lib
E.G.
-Wl,C:\Users\Developer\AppData\Local\Programs\Python\Python310\libs\python310.lib
That comma after -Wl is really important.

Swig/python: when is SWIG_init() needed?

Hi everyone, and thanks for trying to help me!
I'm running into trouble when trying to import a Python module generated by SWIG.
I have a basic library "example" containing a few methods.
Next to it I have a main program dynamically linked to python.
This program imports the generated module and calls a function in it.
If my library example is a shared one, named _example.so, everything works perfectly, and I can import it in python.
But if my library is static, _example.a, and linked to the main program, then I will have the error "no module named _example was found" unless I add a call to SWIG_init() in the main function.
What exactly does SWIG_init() do, and when should I call it? It seems quite weird to me because the documentation never says to make such a call.
I know that using a .so shared library is the better approach, but I'm trying to reproduce the behaviour of a big project at work, so I really have to understand what happens when the module is static.
Here is my main file:
#include "Python.h"
#include <iostream>
#if PY_VERSION_HEX >= 0x03000000
# define SWIG_init PyInit__example
#else
# define SWIG_init init_example
#endif
#ifdef __cplusplus
extern "C"
#endif
#if PY_VERSION_HEX >= 0x03000000
PyObject*
#else
void
#endif
SWIG_init(void);
int main(int argc, char** argv)
{
    Py_Initialize();
    SWIG_init(); // needed only when using the statically linked version of example?
    PyRun_SimpleString("print \"Hello world from Python !\"");
    PyRun_SimpleString("import sys");
    PyRun_SimpleString("sys.path.append(\"/path/to/my/module\")");
    PyRun_SimpleString("import example");
    PyRun_SimpleString("a = example.Example()");
    PyRun_SimpleString("print a.fact(5)");
}
Here is how things are generated:
swig -c++ -python example.i
g++ -fpic -c example.cpp example_wrap.cxx -I/include/python2.7 -lstdc++
ar rvs libexample.a example.o example_wrap.o
// to generate dynamic instead of static : g++ -shared example.o example_wrap.o -o _example.so
g++ main.cpp -I/include/python2.7 libexample.a -lstdc++ -L/lib/python -lpython2.7 -o main
What you are calling is the init function of the native Python module _example, which is loaded by the SWIG-generated Python wrapper. For Python 2 this function is named init_example, and for Python 3 it is named PyInit__example.
Every Python extension in C or C++ needs such a function; it basically initializes everything and registers the name of the module and all the methods available on it. In your case SWIG has generated this function for you.
The reason you have to call this function yourself when you compile the library statically is simply that the Python wrapper example imports the native module _example, which by Python convention is a shared object, which you did not build, and which is thus not found.
By calling SWIG_init, you "preload" the module, so python does not try to reimport it, so it works even though there is no shared object anywhere on the python module path.
If you have the shared object for your module, python will call this function for you after loading the shared object and you don't have to worry about this.
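For reference, the SWIG-generated example.py wrapper essentially boils down to something like the following (a rough sketch; the exact helper code varies between SWIG versions), which is why the import fails when no _example shared object exists and nothing was preloaded:
try:
    from . import _example
except ImportError:
    import _example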

python boost scope causes NoneType error

I've compiled a third-party Python module (alembic); alembic imports another Python module (imath) via PyImport_ImportModule, and imath in turn imports another Python module (iex) via PyImport_ImportModule. The code looks like:
BOOST_PYTHON_MODULE(alembic)
{
    handle<> imath(PyImport_ImportModule("imath"));
}
BOOST_PYTHON_MODULE(imath)
{
    handle<> iex(PyImport_ImportModule("iex"));
}
BOOST_PYTHON_MODULE(iex)
{
    scope().attr("BaseExc") = "An Exception";
}
It works if I import imath first and then import alembic. But if I import alembic directly, it raises a NoneType error at scope().attr("BaseExc") = "An Exception". I've read the boost code, and I'm sure the reason is that detail::current_scope is empty, but I don't know why.
Can anyone help me with this? Why does it happen, and how can I avoid it?
Edit:
I can't reproduce it with the code above. I wrote a cpp file and filled it with this code:
#include "boost/python.hpp"
using namespace boost::python;
BOOST_PYTHON_MODULE(alembic)
{
handle<> imath(PyImport_ImportModule("imath"));
}
Then use this option to compile it:
g++ -fPIC -shared -I/usr/include -L/usr/lib -lboost_python -lpython2.7 -L/usr/lib64 -Wl,-soname,alembicmodule.so -o alembicmodule.so alembic.cpp
And it works fine. I'm surprised, because in the third-party module the error happens at the very first line. Maybe this is not a boost bug but a cmake issue?
This issue comes from the compile options. CMake generates a link script, and this script uses libboost_python.a to link the Python module alembicmodule.so. When I changed libboost_python.a to libboost_python.so, the issue was fixed.

Python C Extension using numpy and gdal giving undefined symbol at runtime

I'm writing a C++ extension for python to speed up a raster image viewer created in-house.
I've got working code, but noticed that the speed hadn't increased that much; after profiling a bit deeper I realised that this was due to the gdal.ReadAsArray calls, which were Python callbacks from the C extension.
To get around the overhead of the Python C-API when calling Python objects, I decided to use the C++ libraries for gdal rather than the Python callback to the existing gdalDataset (space isn't a problem).
However, after implementing the code for this, the extension compiled fine but I ran into an error at runtime:
import getRasterImage_new
ImportError: /local1/data/scratch/lib/python2.7/site-packages/getRasterImage_new.so: undefined symbol: _ZN11GDALDataset14GetRasterYSizeEv
The code below replicates the error (some edits may be needed to run on your machine; ignore the uninitialised variables, it's just what's needed to replicate the error).
Python:
#!/usr/bin/env python
import numpy
from osgeo import gdal
import PythonCTest
print("test starting")
PythonCTest.testFunction(1)
print("test complete")
C++:
#include "Python.h"
#include "numpy/arrayobject.h"
#include "gdal_priv.h"
#include <iostream>
extern "C"{
static PyObject* PythonCTest_testFunction(PyObject* args);
static PyMethodDef PythonCTest_newMethods[] = {
{"testFunction", (PyCFunction)PythonCTest_testFunction, METH_VARARGS,
"test function"},
{NULL,NULL,0,NULL}};
PyMODINIT_FUNC initPythonCTest(void){
(void)Py_InitModule("PythonCTest",PythonCTest_newMethods);
import_array();
}
}
GDALDataset* y;
static PyObject* PythonCTest_testFunction(PyObject* args){
std::cout << "in testFunction\n";
y->GetRasterYSize();
std::cout << "doing stuff" << "\n";
return Py_None;
}
Any suggestions would be very welcome.
EDIT
You can also remove the from osgeo import gdal line and the error still occurs (only just noticed that).
EDIT 2
I forgot to say that I'm compiling my extension using distutils; the current setup.py is
#!/usr/bin/env python
from distutils.core import setup, Extension
import os
os.environ["CC"] = "g++"
setup(name='PythonCTest', version='1.0', \
ext_modules=[Extension('PythonCTest', ['PythonCTest.cpp'],
extra_compile_args=['--std=c++14','-l/usr/include/gdal', '-I/usr/include/gdal'])])
A Python extension module is a dynamically loadable (shared) library. When linking a shared library, you need to specify its library dependencies, such as -lgdal and, for that matter, -lpython2.7. Failing to do so results in a library with unresolved symbols, and if those symbols are not provided by the time it is loaded, the loading will fail, as reported by Python.
To resolve the error, you need to add libraries=['gdal'] to the Extension constructor. Specifying -lgdal in extra_compile_args won't work because compile arguments, as the name implies, are used for compilation and not for linking.
Note that an unresolved symbol would not go undetected when linking an executable, where the build would fail with a linker error. To get the same diagnostics when linking shared libraries, include -Wl,-zdefs in link arguments.
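Putting both points together, a hedged sketch of the corrected setup.py (keeping the names and include path from the question, and assuming a GNU toolchain for the -Wl,-zdefs flag) might look like:
#!/usr/bin/env python
from distutils.core import setup, Extension
import os
os.environ["CC"] = "g++"
setup(name='PythonCTest', version='1.0',
      ext_modules=[Extension('PythonCTest', ['PythonCTest.cpp'],
                             include_dirs=['/usr/include/gdal'],
                             libraries=['gdal'],  # link against libgdal
                             extra_compile_args=['--std=c++14'],
                             extra_link_args=['-Wl,-zdefs'])])  # fail on unresolved symbols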

Loading a DLL from embedded Python code in C

The crux of my problem is this:
I am developing code on Windows XP in C with MS Visual Studio 10.0, and I need to embed Python to do some plotting, file management, and some other things. I had problems with sys.path finding my pure-Python modules, but I fixed that by modifying PYTHONPATH.
Now my problem is getting Python to find the dynamic libraries that are pulled in by some modules. In particular, my task is to compress a folder into a bzip2 archive of the same name.
From a normal python command prompt, this works just fine:
import tarfile
tar=tarfile.open('Code.tar.bz2','w:bz2')
tar.add('Code',arcname='Code')
tar.close()
But when I call this code from my C code, it gives me this error:
Traceback (most recent call last):
File "<string>", line 4, in <module>
File "D:\My_Documents\Code\ScrollModel\trunk\PythonCode.py", line 20, in Colle
ctFiles
tar=tarfile.open(os.path.join(runPath,'CODE.tar.bz2'),'w:bz2')
File "c:\Python26\lib\tarfile.py", line 1671, in open
return func(name, filemode, fileobj, **kwargs)
File "c:\Python26\lib\tarfile.py", line 1737, in bz2open
raise CompressionError("bz2 module is not available")
tarfile.CompressionError: bz2 module is not available
I have a suspicion the problem is similar to what is described in section 5.6 of Embedded Python, but it is a bit hard to tell. For what it's worth, if I do
Py_Initialize();
PyRun_SimpleString("import ssl\n");
Py_Finalize();
it doesn't work either and I get an ImportError.
Anyone had any problems like this? Am I missing something critical?
Try this; it works on my machine.
Create a simple Windows console application in Visual Studio 2010 (remove the precompiled headers option in the wizard). Replace the generated code with this:
#include <Python.h>
int main(int argc, char *argv[]) {
    Py_Initialize();
    PyRun_SimpleString("import ssl \n"
                       "for f in dir(ssl):\n"
                       "  print f \n");
    Py_Finalize();
    return 0;
}
With PYTHONHOME set to something like c:\Python...
add C:\Python\Include to the include path
add C:\Python\Libs to the library path
add python26.lib to the linker input (adjust with your Python version)
Build. Run from anywhere and you should see a listing of the content of the ssl module.
I also tried with MinGW. Same file, built with this command line:
gcc -Wall -o test.exe embeed.c -I%PYTHONHOME%\Include -L%PYTHONHOME%\libs -lpython26
Hey, I have asked a similar question; my operating system is Linux.
When I compile the C file, the option $(python-config --cflags --ldflags) should be added, as in
gcc test.c $(python-config --cflags --ldflags) -o test
I think on Windows you may also check the python-config options. Hope this helps!
I had a similar problem with a Boost C++ DLL. Any external DLL needs to be in the DLL search path.
In my experience, PYTHONPATH affects Python modules (the import statement in Python will end up in a LoadLibrary call), and build options have nothing to do with it.
When you load a DLL, Windows doesn't care what the process is. In other words, Python follows the same DLL loading rules as Notepad. You can confirm that you are facing a Windows path problem by copying any missing DLL into the same directory as your Python extension, or into a directory in your PATH.
To find which DLLs are required by any executable or DLL, simply open the DLL or EXE file with Dependency Walker. There is also a "Profile" menu which will let you run your application and watch it search for and load DLLs.
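As a quick cross-check of which interpreter and which bz2 module the embedded code actually sees, you can run a few diagnostic lines from the embedding host (for instance via PyRun_SimpleString) and compare the output with what a normal Python prompt reports; a minimal sketch:
import sys
print sys.executable
print sys.path
import bz2           # raises ImportError if the bz2 extension cannot be loaded
print bz2.__file__   # where bz2 was actually loaded from (when it is a separate .pyd)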
