DLL import fails after installing unrelated library - python

I'm currently having interop problems between a proprietary library and pyembree.
Problem
I have two mostly identical conda environments. Both contain the same in-house library. The only difference is that I installed pyembree via these instructions in one of them. Now, in that environment, the in-house library cannot load a DLL that it loads without problems in the other. Even the absolute (correct) path is supplied to the ctypes.WinDLL object:
On module level (in a module of the proprietary library):
kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
kernel32.AddDllDirectory(os.path.abspath(os.path.dirname(__file__)))
On import this is called (where name is the correct absolute path and mode is 4096):
handle = kernel32.LoadLibraryExW(name, None, mode)
This raises a ctypes.WinError with the message 'OSError: [WinError 127] The specified procedure could not be found.'
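For reference, here is a minimal standalone reconstruction of that loading pattern (a sketch: the helper name, the restype line and the explicit error check are my assumptions, since the proprietary module's actual code is not shown; 0x1000 is LOAD_LIBRARY_SEARCH_DEFAULT_DIRS, matching mode == 4096):
import ctypes
import os

kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)
kernel32.AddDllDirectory(os.path.abspath(os.path.dirname(__file__)))
kernel32.LoadLibraryExW.restype = ctypes.c_void_p  # keep the full 64-bit handle (my addition)

def load_dll(name, mode=0x1000):  # hypothetical helper, not from the library
    handle = kernel32.LoadLibraryExW(name, None, mode)
    if not handle:
        # This is where WinError 127 surfaces in the failing environment
        raise ctypes.WinError(ctypes.get_last_error())
    return handle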
Tracking overwritten DLLs
I tried to make sense of that error message based on https://stackoverflow.com/a/2603386/5409315. That answer says the message means a DLL is loaded, but in the wrong version. Initially, that made a lot of sense: the pyembree install could have overwritten a dependency of the proprietary lib. (Although I don't understand why that message would be triggered at load time rather than when the procedure is actually called.)
However, what I found with Process Monitor was that in both environments about two dozen DLLs are searched for but not found (e.g. OPENGL32.dll), and that the only DLL that is successfully loaded (vcomp140.dll) is identical between the two environments.
I cross-checked this with Dependency Walker, which found a lot more dependent DLLs but gave the same ultimate result: in both environments, the dependent libraries are either literally the same files or byte-for-byte identical.
The PATH environment variable is also equivalent (that is, entries that contain the environment directory differ between the two environments only in that directory).
Minimal reproducible example
I can't provide a minimally reproducible example because the proprietary library is not publicly available.
Another approach
I'd like to dump all global variables just before the attempt to load the DLL in both environments, diff the output and thus find the crucial difference. Is this possible in VS Code? Is this possible in some other context (pdb, ...)?
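For example, something like this (a sketch; the dump path and the exact insertion point inside the proprietary module are placeholders) could be run once in each environment just before the load, and the two files diffed:
import json
import os
import sys

def dump_state(path):
    # State worth diffing: search paths, environment variables, loaded modules
    state = {
        "sys.path": sys.path,
        "os.environ": dict(os.environ),
        "sys.modules": sorted(sys.modules),
    }
    with open(path, "w") as f:
        json.dump(state, f, indent=2, sort_keys=True)

dump_state(r"C:\temp\dll_state.json")  # run once per conda environment, then diff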

Related

Python can't locate .so shared library with ctypes.CDLL - Windows

I am trying to run a C function in Python. I followed examples online, and compiled the C source file into a .so shared library, and tried to pass it into the ctypes CDLL() initializer function.
import ctypes
cFile = ctypes.CDLL("libchess.so")
At this point Python fails with the message:
Could not find module 'C:\Users\user\PycharmProjects\project\libchess.so' (or one of its dependencies). Try using the full path with constructor syntax.
libchess.so is in the same directory as this Python file, so I don't see why there would be an issue finding it.
I read some things about how shared libraries might be hidden from later versions of Python, but the suggested solutions I tried did not work. Most solutions also referred to fixes involving Linux environment variables, but I'm on Windows.
Things I've tried that have not worked:
changing "libchess.so" to "./libchess.so" or the full path
using cdll.LoadLibrary() instead of CDLL() (apparently both do the same thing)
adding the parent directory to the system PATH variable
putting os.add_dll_directory(os.getcwd()) in the code before trying to load the file
Any more suggestions are appreciated.
Solved:
Detailed explanation here: https://stackoverflow.com/a/64472088/16044321
The issue is specific to how Python performs the DLL/SO search on Windows. While the ctypes docs do not spell this out, the CDLL() function needs the optional argument winmode=0 to work correctly on Windows when loading a .dll or .so by name. This issue is specific to Python 3.8 and later.
Thus, simply changing the 2nd line to cFile = ctypes.CDLL("libchess.so", winmode=0) works as expected.
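Putting the error message's hint and the winmode fix together (a sketch, not tested against that project; the filename is the one from the question):
import ctypes
import os

# Absolute path next to this script, plus winmode=0 to restore the
# pre-3.8-style DLL search behaviour on Windows.
lib_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "libchess.so")
cFile = ctypes.CDLL(lib_path, winmode=0)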

ImportError: DLL load failed while importing ON WINDOWS

I fixed a super-annoying case of "ImportError: DLL load failed while importing" in a way that generally applies to Windows, so let me share it with the group. Here is the question:
I installed FINUFFT via pip install finufft. When I import finufft, I get this error:
ImportError: DLL load failed while importing _finufft: The specified module could not be found.
How do I fix it?
Read to the end before doing anything.
The error means that a DLL cannot find another DLL that it was linked with. But which other DLL?
Download the Dependencies tool (a modern replacement for Dependency Walker).
Locate your problematic DLL. In this specific case: Locate the folder ...\Lib\site-packages\finufft\ of the FINUFFT installation that you want to fix. ...\ is the path of your standard python installation or of your python virtual environment.
Start DependenciesGui.exe and use it to open the problematic DLL, e.g. ...\finufft\_finufft.cp38-win_amd64.pyd. (A .pyd is a regular DLL with some specific entry points for python.)
On the left, you will see a complete list of the problematic DLL's direct dependencies, whose dependencies you can in turn unfold by mouse click. Apart from typical Windows DLLs, like kernel32.dll and MSVCRT.dll, and apart from the FFTW DLLs, which should already be in the FINUFFT folder, there will also be some - possibly missing - MinGW runtime DLLs. For me, these were libgcc_s_seh-1.dll, libgomp-1.dll and libstdc++-6.dll. By checking their direct dependencies, I also discovered that libwinpthread-1.dll was missing.
[See EDIT below!!!] I found those DLLs in Anaconda (...\Anaconda3\Library\mingw-w64\bin\), but you can probably also get them from cygwin (...\cygwin64\bin\), git (...\Git\mingw64\bin\) or anything else that downloads mingw64 and its packages on Windows.
To solve the problem, copy the respective DLLs into ...\Lib\site-packages\finufft\ and give them the exact filenames that the FINUFFT DLL expects according to Dependencies. This works because the Windows DLL search order includes the directory of the DLL being loaded.
Now, import finufft should work in the specific python environment whose FINUFFT installation you fixed. Clearly, this method can be applied anytime DLL dependencies are missing.
EDIT - correction of my answer by @CristiFati: If possible, DLLs and similar artifacts should always be built with the same toolchain. So if you don't compile them yourself, get them from as few different places as possible, i.e. don't mix regular Python, Anaconda, cygwin, etc. if possible. Of course, the Windows DLLs will have a different origin from the MinGW-built DLLs.
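As an alternative to copying the files (an untested sketch; the Anaconda path is only an example and must match your machine), Python 3.8+ also lets you add the folder that already contains those MinGW runtime DLLs to the DLL search path before importing:
import os

# Point the Windows DLL search at the folder containing libgcc_s_seh-1.dll etc.
os.add_dll_directory(r"C:\Anaconda3\Library\mingw-w64\bin")

import finufft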

Using embedded python, loading a *.pyd outside of DLLs directory fails

I have a C++ application (X-Plane) for which there is a plugin which permits the use of python scripts (XPPython3 plugin). The plugin is written in C, using python CAPI, and works great, allowing me to write python scripts which get executed within the C++ application.
On Windows 10, I want to extend my python features by importing imgui. I have a python cython-built pyd file (_imgui.cp39-win_amd64.pyd).
If I place the pyd file in C:\Program Files\Python39\DLLs, it works as expected: the C++ application calls into Python via the C API, which loads the script, which imports and executes the imgui code.
If I place the pyd file anywhere else, embedded Python either reports "module not found" -- if the pyd isn't on sys.path -- or, if it is on sys.path:
ImportError: DLL load failed while importing _imgui: The parameter is incorrect.'
Using os.add_dll_directory(r'D:\somewhere\else') does not affect whether the module is found, nor does it change the 'parameter incorrect' error (see https://bugs.python.org/issue36085 for details on this change -- my guess is that add_dll_directory changes the lookup for DLLs, but not for pyds). sys.path appears to be what is used for locating the pyd.
Yes, the pyd is compiled with Python 3.9: I've compiled it both with the mingw and the Visual Studio toolchains, in case that might make a difference.
For fun, I moved the standard _zoneinfo.pyd out of Python39\DLLs, and it fails in the same way in embedded Python: "The parameter is incorrect". So that would appear to rule out a problem with my specific pyd file.
The key question is:
Other than placing a pyd file under PythonXX\DLLs, is there a way to load a PYD in an embedded python implementation? (I want to avoid having to tell users to move my pyd file into the Python39\DLLs directory... because they'll forget.)
Note that using IDLE or python.exe, I can load pyds without error -- anywhere on sys.path -- so they don't have to be under Python39\DLLs. It's only when trying to load from embedded python that the "Parameter is incorrect" appears. And of course, this works flawlessly on Mac.
(Bonus question: what parameter? It appears to be python passing through a windows error.)
There seems to be a simple answer, though I suspect it's better characterized as a python bug.
There is nothing magical about Python39\DLLs directory.
The problem is using absolute vs relative paths in sys.path.
Python can find modules using absolute or relative paths. So if zippy.py is in folder foobar,
sys.path.append('foobar')
import zippy
# Success
Python can find, BUT NOT LOAD, pyd files using relative paths. For example, move _zoneinfo.pyd from PythonXX\DLLs to foobar:
sys.path.append('foobar')
import _zoneinfo
# ImportError: DLL load failed while importing _zoneinfo: The parameter is incorrect.'
Instead, use an absolute path, and Python will find and load the PYD:
sys.path.append(r'c:\MyTest\foobar')
import _zoneinfo
# Success
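If the folder is only known relative to your script or plugin, you can make the path absolute at runtime before registering it (a sketch; the folder name is the one from the example above and the module name is illustrative):
import os
import sys

# Resolve the plugin folder relative to this file, then register the
# absolute path so the embedded interpreter can actually load the .pyd.
plugin_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "foobar"))
sys.path.insert(0, plugin_dir)

import _zoneinfo  # or _imgui, etc.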
So, there is actually a way to do this: ship your application with the desired libraries. The solution is to use an embedded distribution and ship it with your application. You can find the correct distribution on the official Python download page for your desired version (here's the link to the latest 3.9 release, which seems to be what you're using: https://www.python.org/downloads/release/python-392/). Look for the Windows Embeddable Package.
You can then simply drop in your .pyd file alongside the standard library files (note that if your third-party library is dependent on any other libraries, you will have to include them, as well). Shipping your application with an embeddable distribution should not only solve your current issue, but will also mean that your application will work regardless of which version of Python a user has installed (or without having Python installed at all).
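In the embeddable package, sys.path is controlled by the python39._pth file next to python.exe. If I remember correctly it looks roughly like the snippet below, and you can add a line for the folder holding your extra .pyd files (treat the exact contents as an assumption and check the file shipped in your download; my_plugins is a hypothetical folder name):
python39.zip
.
my_plugins

# Uncomment to run site.main() automatically
#import site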

Why does python throw an undefined symbol error when importing a shared object from an alternate path?

I created a python extension using Boost::Python. To make it easier to use the extension on different target machines, I have included the libboost_python36.so.1.75.0 library in the same directory as the generated extension (pyshmringbuffer.so).
I checked out pyshmringbuffer.so and libboost_python36.so.1.75.0 onto a machine other than the one they were compiled on, into the directory /path/to/pyshmringbuffer.
After setting LD_LIBRARY_PATH to: /path/to/pyshmringbuffer and changing to this directory, I am able to run python3.6 and import the shared object just fine.
The problem comes when I try to run python from an alternate directory. From any other directory, I append the python path as follows:
import sys
sys.path.append("/path/to/pyshmringbuffer")
Then, when I try to import pyshmringbuffer, I get the following undefined symbol:
ImportError: /path/to/pyshmringbuffer/pyshmringbuffer.so: undefined symbol: _ZNK5boost6python7objects21py_function_impl_base9max_arityEv
I was under the impression that all symbols are self contained within the shared object. Why does it matter where I import the shared library from?
The symbol in your error message is an internal Boost.Python one (it demangles to boost::python::objects::py_function_impl_base::max_arity() const). Having it undefined suggests that one of your components was built with an incompatible toolchain version, or that a *.so file (shared object) is out of date in some other fashion.
The simplest way to fix this is usually to rebuild your product components from scratch, in the proper order.
I was able to resolve my issue by prepending /path/to/pyshmringbuffer to my python path using:
sys.path.insert(0,"/path/to/pyshmringbuffer")
I can't say for sure, but as @Prune pointed out, there is something in my Python path that Python sees before it sees the intended library.
Coincidentally, I DO have a build of libboost_python36.so.1.75.0 located elsewhere on the target machine. Its path doesn't appear in my PYTHONPATH or LD_LIBRARY_PATH, so I wouldn't EXPECT it to interfere, but I can't be positive.
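A quick way to check which copy actually got loaded (a sketch; it simply prints the resolved file so a shadowing build elsewhere becomes visible):
import sys

sys.path.insert(0, "/path/to/pyshmringbuffer")
import pyshmringbuffer

# If this prints a different location than expected, another copy on the
# search path is shadowing the intended one.
print(pyshmringbuffer.__file__)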

How to make pycharm use a different cuda toolkit

I want to run an MXNet module on the GPU.
I have a system with Ubuntu 18.04 and CUDA 10.0 installed. Apparently this is not yet covered by the MXNet binaries, so I focused on installing two CUDA versions on my PC (see also here).
Anyway, I now have two CUDA toolkits on my PC, in different folders. I need a way to direct my system to use CUDA 9.2 when running from PyCharm. The funny thing is that from a normal console I can run it just fine (at least the MXNet loading part, that is).
In the module I want to run, the program gets stuck at:
import mxnet as mx
which leads to base.py in MXNet:
def _load_lib():
    """Load library by searching possible path."""
    lib_path = libinfo.find_lib_path()
    lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL)  # <- This is where it throws the error.
    # DMatrix functions
    lib.MXGetLastError.restype = ctypes.c_char_p
    return lib
The strange thing is that lib_path[0] points to the location of libmxnet.so (which is correct, by the way), yet it throws this error:
OSError: libcudart.so.9.2: cannot open shared object file: No such file or directory
If I follow the error trace, the last command is this:
self._handle = _dlopen(self._name, mode)
with self._name being the same path to libmxnet.so.
I have tried to make it work by setting the environment variable with
os.environ["LD_LIBRARY_PATH"] = "/usr/local/cuda-9.2/lib64"
as the second line of the module (the first being import os, of course), but this does not seem to work. Apparently it's not taken into account.
So, how can I bypass this?
Any solution would be acceptable being on the MXNet side or pyCharm side.
Well, to make this available to anyone facing the same problem, I will post my solution.
I managed to make it work by defining the environment variable in PyCharm's run configuration (available from Run -> Run... or Alt+Shift+F10):
LD_LIBRARY_PATH: /usr/local/cuda-9.2/lib64
I am not sure why PyCharm works fine in that case, while defining the same variable inside the code does not (any explanation welcome).
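One in-code alternative that sometimes works (an untested sketch; the dynamic loader reads LD_LIBRARY_PATH only once at process start, which is why setting it from inside the running script has no effect): preload the exact CUDA runtime with an absolute path before importing MXNet, so the dependency is already resolved when libmxnet.so is opened.
import ctypes

# Preload libcudart globally so the subsequent dlopen of libmxnet.so
# finds it without consulting LD_LIBRARY_PATH.
ctypes.CDLL("/usr/local/cuda-9.2/lib64/libcudart.so.9.2", mode=ctypes.RTLD_GLOBAL)

import mxnet as mx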
