Python version: 3.8.1
Spyder version: 3.3.6
Qt version: 5.12.9
Wrapper: developed using PyBind11
I am wrapping a DLL developed in C++ (which uses Qt DLLs) so that it can be used from Python. I wrote the wrapper with Visual Studio 2019 using the MSVC compiler (as my DLL is compiled with MSVC). After building the solution in VS2019 I obtain a .pyd file which can be imported with Python.
It works fine when I use Python on the command line:
Start cmd.exe
$python
import MyLibName
I can use the functions/classes ...
But if I try with Spyder, I get the following error:
ImportError: DLL load failed while importing PyStack: The specified module could not be found..
So here are my questions:
Is there a way to get more information about the ImportError, like the name of the missing DLL?
I don't understand why the issue only happens with Spyder. I tried with the IPython Qt Console and it works. Does Spyder use an embedded Python version or something?
I don't fully understand how DLLs should be managed. Should I ship DLLs like libGLESv2.dll alongside the .pyd, or just provide a path where they can be found?
Thank you in advance.
My guess
I think I found out which part of Qt/Python is producing this issue, but I still don't know how to solve it.
My DLL uses signals/slots, which need an event loop to run. If an event loop is already running, the DLL will try to use it; if that loop's version (e.g. PyQt5==5.14.1) isn't the same as mine (e.g. Qt==5.15.1), the import is impossible.
Note that the reverse is also true: if I load my DLL first and then try to start a loop with %gui qt, the command will throw an error.
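A quick way to check whether a Qt event loop is already running in the interpreter before the DLL gets imported (a minimal sketch, assuming PyQt5 is the installed binding):
# Returns the running QApplication instance, or None if no Qt event loop exists yet
from PyQt5.QtWidgets import QApplication
print("Qt event loop already running:", QApplication.instance() is not None)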
How to reproduce the issue:
Compile a Qt project available here.
Copy the output dll in the folder PyMyStack/dependencies of the VS Project (available here)
Compile the VS project.
Open an IPython console (without using Qt as the event loop)
Import the module created with VS (import PyMyStack)
Run the magic command %gui qt
The last command should print: ERROR:root:DLL load failed while importing QtSvg: The specified procedure could not be found.
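Put together, the reproduction in the IPython console looks roughly like this (module name taken from the project above; %gui qt is an IPython magic command):
import PyMyStack   # loads the extension together with my Qt DLLs
%gui qt            # IPython now tries to start a loop with the installed PyQt5
# -> ERROR:root:DLL load failed while importing QtSvg: The specified procedure could not be found.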
How to hide/solve the problem:
Disclaimer: the solutions presented here are surely not the best; if you know a better one, please share it ☺
If you just want to import your lib in Spyder, you can use another event loop. Here are the steps to change this:
In Spyder menus go to Tools→Preferences
Select “IPython console”
Go to the “Graphics” tab and change the backend combo box to any value other than Qt or Automatic
If you want to use the Qt event loop you will have to update PyQt5. You can do this with pip, but remember that Spyder is not compatible with some versions. Here is the pip command:
pip install PyQt5==X.Y.Z
Where X and Y are the same major and minor version used to compile your Qt project. The last digit does not seem to matter.
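To find out which Qt version the installed PyQt5 was actually built against (and therefore which X.Y to put in the pip command), a small check like this works; only PyQt5 itself is assumed:
from PyQt5.QtCore import QT_VERSION_STR, PYQT_VERSION_STR, qVersion
print("Qt version PyQt5 was compiled against:", QT_VERSION_STR)
print("Qt version loaded at runtime:", qVersion())
print("PyQt5 version:", PYQT_VERSION_STR)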
When I import a module I built, I get this boost-python related error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: dlopen(./myMod.so, 2): Symbol not found: __ZN5boost6python7objects15function_objectERKNS1_11py_functionERKSt4pairIPKNS0_6detail7keywordES9_E
Referenced from: ./myMod.so
Expected in: flat namespace
in ./myMod.so
What does this actually mean? Why was this error raised?
Description
The problem was caused by mixing objects compiled with libc++ and objects compiled with libstdc++.
In our case, the library myMod.so (compiled with libstdc++) needs a boost-python that was also compiled with libstdc++ (boost-python-libstdc++ from now on). When boost-python is boost-python-libstdc++, it will work fine. Otherwise, on a computer whose boost-python was compiled with libc++ (or another C++ standard library), it will have a problem loading and running.
In our case, this happens because the libc++ developers intentionally changed the names of all of their symbols to prevent you (and save you) from mixing code from their library with code from a different one: myMod.so needs a function that takes an argument of type std::pair. In libc++, this type's name is std::__1::pair, so the symbol it asks for is not found.
To understand why mixing two versions of the same API is bad, consider this situation: there are two libraries, Foo and Bar. They both have a function that takes a std::string and uses it for something, but they use different C++ standard libraries. When a std::string created by Foo is passed to Bar, Bar will think it is an instance of its own standard library's std::string, and bad things can happen (they are completely different objects).
Note: in some cases there would be no problem with two or more different versions of the same API in completely different parts of a program. There will be a problem if they pass this API's objects between them. However, checking for that can be very hard, especially if they pass the API object only as a member of another object. Also, a library's initialization function can do things that should not happen twice; another version may do those things again.
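A rough way to see which C++ standard library a module expects is to demangle its undefined symbols and look for std::pair (libstdc++) versus std::__1::pair (libc++). A minimal sketch, assuming a macOS/Linux machine with nm and c++filt on the PATH:
import subprocess
# List the undefined symbols of the extension, then demangle them
undefined = subprocess.run(["nm", "-u", "./myMod.so"],
                           capture_output=True, text=True).stdout
demangled = subprocess.run(["c++filt"], input=undefined,
                           capture_output=True, text=True).stdout
for line in demangled.splitlines():
    if "pair" in line:
        print(line)  # std::__1::pair here means the symbol comes from libc++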
How to solve that?
You can always recompile your libraries and make them match each other.
You can link boost-python to your library as a static library. Then, it will work on almost every computer (even one that doesn't have boost-python installed). See more about that here.
Summary
myMod.so needs a particular build of boost-python, one compiled with a specific C++ standard library. Therefore, it will not work with any other build.
In my case I was receiving:
ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/xmlsec.cpython-38-darwin.so, 0x0002): symbol not found in flat namespace '_xmlSecDSigNs'
BACKGROUND:
M1 MacBook Pro with Monterey
I was working in a Python virtualenv (using pyenv) to use an earlier version of Python 3.8 (3.8.2), while my system had 3.8.10 installed natively.
While the 3.8.2 virtualenv was activated, I noticed the path in dlopen() was pointing to the package in the native Python install, NOT the virtualenv install.
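A quick sanity check to confirm which interpreter and search path are actually in use inside the activated virtualenv (nothing project-specific assumed):
import sys
print(sys.executable)  # should point inside the pyenv virtualenv, not the system framework
print(sys.prefix)      # base directory of the interpreter actually running
print(sys.path)        # the directories the import system will search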
SOLUTION:
In my case, I did not need the native 3.8 version at all so I simply removed it and this solved the problem.
I encountered the same problem.
Expected in: flat namespace
Adding the linker flag below fixes the problem:
-lboost_python37
Change the dynamic library name to the one installed on your OS.
By the way, my OS is macOS High Sierra and I used brew to install boost-python3.
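In a distutils/setuptools build, the same fix is passing the matching library name to the Extension. A minimal sketch with hypothetical file names and paths (adjust boost_python37 and the library/include directories to whatever brew installed on your machine):
from setuptools import setup, Extension

ext = Extension(
    "myMod",
    sources=["myMod.cpp"],
    libraries=["boost_python37"],        # equivalent of -lboost_python37
    library_dirs=["/usr/local/lib"],     # where brew put libboost_python37.dylib (assumed)
    include_dirs=["/usr/local/include"], # Boost headers (assumed)
)
setup(name="myMod", ext_modules=[ext])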
Symbol not found means the definition of the declared function or variable was not found. When a header file of a shared object is compiled with your program, the linker adds the symbols of the declared functions and objects to your compiled program. When your program is loaded by the OS's loader, the symbols are resolved so that their definitions can be loaded. Only at this point, if an implementation is missing, does the loader complain that it couldn't find the definition, perhaps because it failed to resolve the actual path to the library, or because the library itself wasn't compiled with the implementation/source file where the definition of the function or object resides. There is a good article on this in the Linux Journal: http://www.linuxjournal.com/article/6463.
In my case I was simply failing to include all the required sources (C++ files) when compiling with Cython.
From the string after "Symbol not found" you can understand which library you are missing.
One of the solutions I found was to uninstall and reinstall it using the no-binary flag, which forces pip to compile the module from source instead of installing from a precompiled wheel.
pip install --no-binary :all: <name-of-module>
Found this solution here
Here's what I've learned (osx):
If this is supposed to work (i.e. it works on another computer), you may be experiencing clang/gcc issues. To debug this, use otool -l on the .so file which is raising the error, or on a suspect library (in my example it's a boost-python dylib file), and examine the contents. Anything in the /System/ folder is built with clang, and should instead be installed somewhere else with the gcc compiler. Never delete anything in the /System folder.
.so files are dynamic libraries (so = shared object). On Windows they are called .dll (dynamic-link library). They contain compiled code exposing functions that any executable which links them can use.
What is important to notice here is that those .so are not Python files. They were probably compiled from C or C++ code and contain public functions which can be used from Python code (see documentation on Extending Python with C or C++).
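As a general illustration that a shared library just exposes compiled functions callable from other code, here is a minimal ctypes sketch with a hypothetical library and function name (note that extension modules like the .so above are imported directly rather than loaded this way):
import ctypes

lib = ctypes.CDLL("./libexample.so")             # ".dll" on Windows; hypothetical name
lib.add.argtypes = [ctypes.c_int, ctypes.c_int]  # declare the C signature: int add(int, int)
lib.add.restype = ctypes.c_int
print(lib.add(2, 3))                             # calls the compiled function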
In your case, well, you have a corrupt .so. Try reinstalling the affected libraries, or Python, or both.
Problem
I had this same issue when running puma as part of a Rails app:
LoadError:
dlopen(/Users/alucard/.rbenv/versions/2.7.6/lib/ruby/gems/2.7.0/gems/puma-5.6.4/lib/puma/puma_http11.bundle, 0x0009): symbol not found in flat namespace '_ERR_load_crypto_strings'
/Users/alucard/.rbenv/versions/2.7.6/lib/ruby/gems/2.7.0/gems/puma-5.6.4/lib/puma/puma_http11.bundle
Solution
It was solved just by reinstalling the puma gem: gem install puma
I'm trying to integrate CUDA into an existing application which uses boost::spirit.
Isolating the problem, I've found out that the following code does not compile with nvcc:
main.cu:
#include <boost/spirit/include/qi.hpp>
int main(){
    exit(0);
}
Compiling with nvcc -o cudaTest main.cu I get a lot of errors that can be seen here.
But if I change the filename to main.cpp, and compile again using nvcc, it works. What is happening here and how can I fix it?
nvcc sometimes has trouble compiling complex template code such as is found in Boost, even if the code is only used in __host__ functions.
When a file's extension is .cpp, nvcc performs no parsing itself and instead forwards the code to the host compiler, which is why you observe different behavior depending on the file extension.
If possible, try to quarantine code which depends on Boost into .cpp files which needn't be parsed by nvcc.
I'd also make sure to try the nvcc which ships with the recent CUDA 4.1. nvcc's template support improves with each release.
I have the NAG Fortran compiler installed. I can compile Fortran code by calling nagfor -o helloworld helloworld.f90. If I run f2py with f2py -c -m helloworld helloworld.f90 --fcompiler=nagfor nothing happens. Additionally, if I just run f2py nothing happens. f2py --help-fcompiler gives no output.
I have Windows 7 installed and use the Anaconda Python distribution. Any idea how I should address this problem?
Following Ian's comments and this post I managed to run f2py (unfortunately only with the GNU Fortran compiler).
I had to change line 337 in C:\Loopy\Lib\site-packages\numpy\distutils\fcompiler\gnu.py to:
pass #raise NotImplementedError("Only MS compiler supported with gfortran on win64")
Additionally I use C:\Loopy\Scripts\f2py.py.
It's unusual that you aren't seeing any error output at all.
That makes it sound like you're calling something else.
Make sure Anaconda's Scripts directory is on your path and that you don't have some other script called f2py in your current directory.
Depending on how your computer is set up to interpret file types, you may need to run something like python f2py.py with the rest of the arguments the same.
If you're using Anaconda, you should already have a copy of gfortran installed too.
If you want to use that instead, make sure Anaconda's bin directory is on your path.
Unless you have a very recent (1.10, currently in development) version of numpy, to use gfortran, you'll need to go to Anaconda/Lib/site-packages/numpy/distutils/fcompiler/gnu.py and comment out the lines (somewhere around line 330) that raise an error if you're on 64 bit windows.
Once you've done that, it should work fine.
Edit: judging by the old f2py docs and the current source, the proper fcompiler flag is --fcompiler=nag.
The compiler is specified by vendor, not by executable name.
I have a working c++ code that I want to wrap into a python module on Windows XP and Python 2.7. I have never done this before, so I looked into swig and distutils.
I created an interface file and a setup.py and compiled using
python setup.py build_ext -c mingw32
The script creates a module_wrap.cpp from my module.i and module.cpp files, and then creates a module_wrap.o and a module.o. The creation of module.o produces a bunch of warnings about unused variables and deprecated char*, but it seems to work. Because the C++ code is not mine, I don't really want to get into these right now.
The last step is executing
g++.exe -shared -s build\temp.win32-2.7\Release\module_wrap.o build\temp.win32-2.7\Release\module.o build\temp.win32-2.7\Release\_module.def -LC:\Programme\Python27\libs -LC:\Programme\Python27\PCbuild -lpython27 -o build\lib.win32-2.7\_module.pyd
I get
Cannot export init_module: symbol not defined
error: command 'g++' failed with exit status 1
I've googled a lot about this now, and I just cannot find a solution to this problem. The previously created _module.def seems to try to export this init function, since it contains
LIBRARY _module.pyd
EXPORTS
init_module
Obviously this doesn't work, but I have no idea why. Can anyone help me out here?
I figured it out. The problem was the (not posted) interface file module.i for SWIG. There I named the module %module usemodule, whereas in setup.py I named it name=module. This way SWIG created an init function that did not match the name the created module was expected to export. In the end: just a typo...
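For reference, a minimal sketch of how the names have to line up (module names here are hypothetical): if the interface file declares %module usemodule, SWIG generates the init function for a C extension named _usemodule, so the Extension in setup.py must use exactly that name:
# setup.py (Python 2.7 / distutils, as in the question)
from distutils.core import setup, Extension

ext = Extension(
    "_usemodule",                        # must be "_" + the %module name from module.i
    sources=["module.i", "module.cpp"],
    swig_opts=["-c++"],
)
setup(name="usemodule", ext_modules=[ext], py_modules=["usemodule"])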
Thanks for your support nevertheless!