I am in the process of updating our build machine (from Windows Server 2008 to Windows 10), and we need to support both Python 2 and Python 3. We are using SWIG to generate Python bindings for our C++ code. As part of this we generate both a C++-only DLL (mylib_shared.dll, built directly using Clang) and a Python-version-specific DLL (_mylib.dll, built from the SWIG output).
The build is currently fine in Python 2.7, but in Python 3, when I try to import a function from _mylib.dll using the SWIG bindings, I get an error saying the shared DLL is missing. This file is identical regardless of which version of Python is running, it is definitely present in the expected place, and that directory is in the PATH variable (I've checked, very carefully). I have used Dependencies and dumpbin to check what it's looking for; the only difference I can see is that Python38.dll (which is obviously imported by the Python 3 version but not the Python 2 version) has a dependency on ws2_32.dll (present in C:\Windows\SysWOW64) that isn't a dependency of the equivalent Python 2 DLL. Dependencies says it's missing api-ms-win-core-string-obsolete-l1-1-0.dll, but I think this is a red herring - that seems to be an implementation-dependent thing (see https://answers.microsoft.com/en-us/windows/forum/windows_10-files/missing-api-ms-win-core-dlls/d99d1368-0f92-43db-bbdb-7d080f1f96e9). I have checked that everything is being built for the correct architecture (32-bit) by loading the DLLs with dumpbin.
So I'm stumped. I had a similar problem previously caused by mixed architectures, but I resolved that, and as I say, I'm now checking the built DLLs directly, so I can't see how it could be that (also you'd expect it to be a problem for both versions of Python). The file is definitely there. I don't know what else would be causing Windows to claim it can't find the DLL. Any help gratefully received!
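For reference, this is roughly the kind of check I've been running from a Python 3 prompt; the directory below is just a placeholder for my real build output folder:

import ctypes
import os

dll_dir = r"C:\build\output"  # placeholder for the folder that actually holds mylib_shared.dll

# os.add_dll_directory() only exists on Python 3.8+, so guard it; it explicitly
# adds the folder to the DLL search path for this process.
if hasattr(os, "add_dll_directory"):
    os.add_dll_directory(dll_dir)

# Loading the shared DLL directly via ctypes usually surfaces a more specific
# WinError than the generic "module not found" raised by the failed SWIG import.
ctypes.WinDLL(os.path.join(dll_dir, "mylib_shared.dll"))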
I've been trying to include Python.h in my application, but I wanted to do it statically so it can be a standalone exe that knows how to run Python on its own. I've tried using the DLLs and using something like iexpress to bundle the two; however, even with that method the user needs to have Python on their system path to run it, which just ruins all the convenience. Whenever I try to build Python with CMake or some of the other methods I've seen online, there are always very obscure errors, and fixing them just leads to more. Does anyone know how to statically build Python for Windows, or does someone maybe already have it built?
Sorry for answering my own question, but I've seen this crop up in multiple forums, so I wanted to post this in the hope it helps someone else.
The basic problem occurs when trying to run Ansible against older Python interpreters -- particularly Python 2.6 on RedHat 5 -- and getting error messages along the lines of 'libselinux bindings not available'.
While this could happen for any Python-based application, I see it most commonly on Ansible. Ansible presumes the selinux module is available and will always attempt to import it at runtime.
The libselinux-python bindings are not a simple Python module. The module must be built against both the target version of Python and the target version of libselinux. The nice folks maintaining the RedHat 5 EPEL repositories did not generate a Python 2.6/libselinux 1.33 module.
The 'existing' libselinux-python module from the standard repos will not work, because it is specific to the supplied Python 2.4 interpreter. If you copied the module from a different Python 2.6 install -- say, a RedHat 6 system -- that won't work either, because it's built against the wrong version of libselinux. While you can amuse yourself with the various errors created by different combinations, Ansible won't bother to distinguish between them; it will just state that the bindings are unavailable.
The solution is to create a 'stub' selinux Python module to pacify Ansible. Create a file
/usr/lib64/python2.6/site-packages/selinux/__init__.py
with the following contents:
def is_selinux_enabled():
    return False

def is_selinux_mls_enabled():
    return False
(This is Python code, so mind the indents.) This effectively stops Ansible from working with selinux at all. Tasks run against these systems should therefore not include any selinux attributes, such as setype or seuser. (Honestly, I haven't tested it fully.) But basic modules like lineinfile or command now work properly.
This does not require disabling selinux; it only prevents Ansible from manipulating selinux attributes. If necessary, you can always use one of the command modules to script around it.
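A quick way to sanity-check the stub from the target host is plain Python, nothing Ansible-specific:

import selinux

# Both calls should return False once the stub module above is in place.
print(selinux.is_selinux_enabled())
print(selinux.is_selinux_mls_enabled())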
I would have liked to post this as a comment on #crankyeldergod's answer (as his response led me to figure out my fix to this issue), but I don't have enough posts to comment yet.
I also kept receiving the "Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!" error despite having the libselinux-python packages installed. I went into /usr/lib64 and checked the Python directories I found there until I located the one with selinux files present. I made a note of that version of Python and declared it explicitly in my inventory file, i.e. ansible_python_interpreter=/usr/bin/python3.6 in my case, and that resolved the issue.
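For reference, the relevant inventory line looked roughly like this (the hostname is made up; the interpreter path is whichever Python actually has the selinux bindings):

myhost.example.com ansible_python_interpreter=/usr/bin/python3.6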
I'm currently trying to learn more about the layers API of TensorFlow; for this I'm trying the Cloud ML samples (census: https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/census).
When I launch the script on my Windows computer (Windows 10, run locally, not distributed, CPU mode), I get the following error:
File "\*\Anaconda3\lib\site-packages\tensorflow\contrib\layers\python\layers\feature_column.py", line 1652, in insert_transformed_feature name="bucketize")
File "\*\Anaconda3\lib\site-packages\tensorflow\contrib\layers\python\ops\bucketization_op.py", line 48, in bucketize
return _bucketization_op.bucketize(input_tensor, boundaries, name=name)
AttributeError: 'NoneType' object has no attribute 'bucketize'
In the code of tensorflow (I used version 1.0.0, and upgraded it to 1.0.1 with the same error), I saw in the file tensorflow\contrib\layers\python\ops\bucketization_op.py that the op was loaded from native code:
_bucketization_op = loader.load_op_library(
    resource_loader.get_path_to_datafile("_bucketization_op.so"))
At this point I actually have two questions:
Am I wrong to think that this is only valid on Linux, or might the .dll have been renamed to .so to keep the Python code consistent? If there is such a renaming, can someone tell me where I could find this file, as a quick search of the folder gave no result for *.dll or *.so (I assume all the native code is wrapped by SWIG inside _pywrap_tensorflow.pyd)?
Does anyone have a clue of why this kind of error could happen?
TL;DR: These ops should now work in the current nightly build of TensorFlow. I've sent out a pull request to add support in the upcoming 1.1 release.
The explanation is a bit tortuous, but I'll attempt to lay out the key points.
In general, the tf.contrib libraries have limited support on Windows, often because they depend on platform-specific code that does not work (or has not historically worked) on Windows. Until very recently the tf.load_op_library() API did not work on Windows, but a recent pull request added support for it. Nightly builds for TensorFlow on Windows now include .dll files for some extension libraries, and the loader library includes code that converts the .so extension to .dll on Windows.
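For anyone unfamiliar with that API, loading an op library by hand looks roughly like this (the library name here is made up; as noted above, the contrib loader additionally maps the .so suffix to .dll on Windows):

import tensorflow as tf

# tf.load_op_library() returns a Python module exposing a generated wrapper
# function for each op registered in the shared library.
my_ops = tf.load_op_library("_my_ops.so")
print(dir(my_ops))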
As a historical workaround for this problem, we statically linked every tf.contrib kernel into _pywrap_tensorflow.pyd, and made loader.load_op_library() fail silently if the extension was not present on Windows. However, there are two ways to get the generated Python wrapper functions for each op:
The more common way, which (e.g.) tf.contrib.tensor_forest uses, is to generate the Python source at build time and include the generated code in the PIP package. This works fine on Windows.
The less common way, which bucketization_op.py uses, is to generate the Python source at run time and return a generated Python module from loader.load_op_library(). Since we made this fail silently and return None on Windows, calling _bucketization_op.bucketize() doesn't work (a simplified sketch of the failure follows below).
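This is not the actual TensorFlow code, just the shape of the problem described above:

# Stand-in for the contrib loader's old Windows behaviour: the extension
# library can't be found, so the "module" handed back to callers is None.
def load_op_library_silently(path):
    return None

_bucketization_op = load_op_library_silently("_bucketization_op.so")

# The first use then fails exactly as in the question:
# AttributeError: 'NoneType' object has no attribute 'bucketize'
_bucketization_op.bucketize([1.0, 2.0], boundaries=[1.5])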
Finally, due to operational concerns, we determined that it would be useful to switch between the static and dynamic linking of the tf.contrib kernels on all platforms, and the easiest way to do that would be to generate the wrapper code statically. A recent change (which alas just missed the branch for the 1.1 release) made the generation of wrapper code consistent across all of the tf.contrib libraries.
I hope this makes sense. As a result of all of these changes, if you upgrade to a nightly build of TensorFlow the problem should be fixed, and
hopefully we can merge the change into the 1.1 release as well!
I am trying to write my own script to create partitions. (Even though this can be done in anaconda, I want my custom script.) The script creates LVM-based partitions using the lvm2py module. lvm2py requires the liblvm2app library, which I installed in my squashfs.
When my script runs at installation time, it fails with "LVM Library not found". This is the error reported by lvm2py when find_library("lvm2app") fails.
This happens even though liblvm2app.so is present in /usr/lib64/ and all the other libraries dependent on liblvm2app.so show as resolved in ldd.
Also note that a sample Python script that just does find_library("c") fails too. It looks like Python is not able to detect any of the shared libraries.
I also tried adding /usr/lib64 to LD_LIBRARY_PATH, but no luck.
Python is compiled with libpython support.
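A minimal sketch of the kind of check that can be run inside the installer environment (the explicit path is simply where the library lives on my system):

import ctypes
from ctypes.util import find_library

# find_library() is what lvm2py relies on; both of these come back as None here.
print(find_library("c"))
print(find_library("lvm2app"))

# Separate check: load the library by explicit path, bypassing the name lookup.
lib = ctypes.CDLL("/usr/lib64/liblvm2app.so")
print(lib)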
I wanted to use OpenCV with Python, so I downloaded OpenCV for Windows and got a folder of ~3.7GB after decompression. What surprised me was that the only file I needed was cv2.pyd, which was tiny (~11MB) compared to the C builds (~674MB). I simply copied it to my Python site-packages folder without adding anything to my PATH, and it worked perfectly.
I don't know how Python bindings work, but I thought they should call C/C++ implementations under the hood. However, cv2 did not seem to require any C/C++ library. It just looks like magic to me.
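For what it's worth, a quick check of what actually gets loaded looks like this:

import cv2

print(cv2.__version__)  # the binding imports and reports its version
print(cv2.__file__)     # points at the cv2.pyd copied into site-packages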
Most likely it has something to do with static linking and using all possible tricks found in "Reducing Executable Size" or "GCC x86 code size optimizations".
OpenCV uses CMake as its build system, which provides a "MinSizeRel" build type. It seems to auto-apply most of those tricks. I couldn't find any good documentation on that, hence: [citation needed]
(My original answer, which didn't quite address the actual question, follows.)
A more convenient way to get OpenCV for Python may be to download it from: http://www.lfd.uci.edu/~gohlke/pythonlibs/#opencv
After running the installer you'll find cv2.pyd in c:\python27\lib\site-packages.
As far as we are concerned, a .pyd file is the same as a .dll: http://docs.python.org/2/faq/windows.html#is-a-pyd-file-the-same-as-a-dll
Which means that we can use Dependency Walker to look into it. Doing so shows that cv2.pyd is dynamically linked against the OpenCV libraries, which contain the actual functionality. Those take up around ~45MB of disk space.