Python Builtin Module _socket

I'm looking through python built in library modules, and for example in socket.py I see the line:
import _socket
I understand that the socket module acts as a wrapper for _socket. I want to read through some of the source code files within _socket to see how certain tasks are accomplished.
Where can I find _socket or any of these other shared files on a Linux box?

_socket is a C extension module. The socket.py module wraps it with additional functionality that doesn't need the speed boost or access to OS-level C APIs.
If you are versed in C, you can read the socketmodule.c source code.
There is no one-to-one mapping between the final .so or .dll file and the original source file, however. You can grep the setup.py file for the names instead:
exts.append( Extension('_socket', ['socketmodule.c'],
depends = ['socketmodule.h']) )
Take into account, however, that some modules are built in, compiled as part of the Python binary; these are all listed in the sys.builtin_module_names tuple.
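To see for yourself where a given module comes from, and whether it is compiled into the interpreter, you can ask Python directly; a small sketch:

```python
import sys
import importlib.util

# Modules compiled into the interpreter binary itself are listed here:
print('_socket' in sys.builtin_module_names)

# For modules shipped as shared libraries, find_spec reveals the file:
spec = importlib.util.find_spec('_socket')
print(spec.origin)  # e.g. .../lib-dynload/_socket.cpython-311-x86_64-linux-gnu.so
```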

You can use the __file__ attribute:
In [11]: _socket.__file__
Out[11]: '/Users/andy/.miniconda3/lib/python3.5/lib-dynload/_socket.cpython-35m-darwin.so'
For Python packages you can also use the __path__ attribute (for the directory):
In [12]: yapf.__file__
Out[12]: '/Users/andy/.miniconda3/lib/python3.5/site-packages/yapf/__init__.py'
In [13]: yapf.__path__
Out[13]: ['/Users/andy/.miniconda3/lib/python3.5/site-packages/yapf']

Related

Why does calling my FORTRAN DLL in python require the python entry point to be in the DLL's parent directory?

I have a Fortran library compiled with gfortran into a DLL.
I am trying to call this DLL from Python 3.9, and that can work: I can access the expected functions, so this issue has nothing to do with ctypes itself.
Additionally, I know that the DLL works and can be called from Python; another project uses the same DLL but has a flat folder structure (possibly to work around this very issue). However, I need it to be in a Python package.
This main DLL has a few dependencies that need to be shipped with it. These dependencies must be in the parent directory of the main DLL (why, I have no idea, but that is the only way it works).
The issue occurs when trying to use this DLL in my python package.
If the entry point of the Python code that calls the DLL is in the parent directory of the DLL, then I can access the expected functions; if it is anywhere else, I get the following error:
FileNotFoundError: Could not find module 'I:\<full-path>\wrappers\lib\foo.dll' (or one of its dependencies). Try using the full path with constructor syntax.
On the line:
self._libHandle = LoadLibrary(str(self._dll_path))
self._dll_path is a Path object with the absolute path to the DLL, I check the file exists before passing it to LoadLibrary.
I have the following directory structure (additional files removed for brevity):
src
|---entry.py
|---wrappers
| |---lib
| | |---foo.dll
| |---dep1.dll
| |---dep2.dll
| |---foo-wrapper.py
| |---adj-entry.py
If I add some test code to the bottom of foo-wrapper.py, then I can access my DLL; if I import foo-wrapper into entry.py, I get the error above. Using the same code from entry.py in adj-entry.py works absolutely fine. The test code is shown below.
from src.wrappers import Foo
from pathlib import Path
dll_path = Path("../../src/wrappers/lib/foo.dll").resolve() # This path is the only thing adjusted between entry.py and adj-entry.py. Remove 1 ../ for entry.py
assert dll_path.exists()
assert dll_path.is_file()
f = Foo(dll_path)
The only thing that seems to change is the file that python.exe is actually invoked with. When that file is in the DLL's parent directory, everything works; if it is anywhere else, I get the dependency error.
Does anyone know how I can call this DLL from anywhere?
Could this be related to the gfortran build or the Fortran code itself?
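This is most likely not a gfortran issue: since Python 3.8, ctypes on Windows no longer resolves dependent DLLs through the working directory or PATH, which would explain why only the entry point's location matters. A hedged sketch (the directory layout is taken from the question) that registers the extra search directories with os.add_dll_directory before loading:

```python
import os
import ctypes
from pathlib import Path

def load_with_deps(dll_path):
    """Load a DLL whose dependencies live outside its own directory.

    Since Python 3.8, ctypes on Windows no longer resolves dependent
    DLLs via PATH or the current working directory; each extra search
    directory must be registered with os.add_dll_directory().
    """
    dll_path = Path(dll_path).resolve()
    if hasattr(os, "add_dll_directory"):  # Windows, Python 3.8+
        os.add_dll_directory(str(dll_path.parent))         # lib/ (foo.dll)
        os.add_dll_directory(str(dll_path.parent.parent))  # wrappers/ (dep1.dll, dep2.dll)
    return ctypes.CDLL(str(dll_path))
```

With this, the load no longer depends on where the entry script lives, only on the package's own layout.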

Python Import cannot find shared object file (C++ extension)

I'm creating a C++ extension module for Python 3. The setup.py build compiles just fine, but when I go to import my new module, I get
ImportError: libMyLib.so: cannot open shared object file: No such file or directory
this is the path to my .so:
/path/to/lib-cc7/libMyLib.so
I've tried to import my libraries in the setup.py in different ways, I have tried setting and re-setting my LD_LIBRARY_PATH variable in the terminal as well as in my .bash_profile. I have tried setting the paths in sys.path.
When I run this code before the import statement:
print(os.environ.get("LD_LIBRARY_PATH"))
print(os.environ.get("PYTHONPATH"))
I get the path to the correct library directory.
When I run strace, the paths to the other .so files that I need show up, and I see it searching for libMyLib.so, but it searches what seems like every directory except /path/to/lib-cc7/. For the other libraries it does check /path/to/lib-cc7/.
I have sanity checked that the library is there about 5 times.
It seems like no matter what I do,
import MyModule.MySubModule as SubModule
always returns the same import error. Is there anything else that I haven't tried? Why does it seem like Python is looking in the wrong place for my .so?
EDIT 1:
This is what my setup.py (in essence) looks like:
Submodule1 = Extension('Module.Submodule1',
                       sources=['Module/Submodule1/submod1.cpp'],
                       language='c++',
                       libraries=[..long list of libraries..],)
Submodule2 = Extension('Module.Submodule2',
                       sources=['Module/Submodule2/submod2.cpp'],
                       language='c++',
                       libraries=[..long list of libraries..],)
setup(name="Module", version='1.0',
      packages=['Module', 'Module.Submodule1', 'Module.Submodule2'],
      ext_modules=[Submodule1, Submodule2])

Safely Unload / Reload C++ dll in Python 2.7 (Replace dll with a new version)

I ask this question given that most other questions on this topic are 5-10 years old, and given the following:
Windows 7 OS
Several versions of a .pyd compiled dll at different file path locations e.g.
/ver1.0/lib/my_dll.pyd
/ver1.1/lib/my_dll.pyd
my_dll.pyd is imported from a secondary file as from my_dll import *; I am not in control of this method or at liberty to alter it
I can successfully switch between two .pyd versions as follows
import sys
sys.path.append(r'\path\to\ver_1.0')
import my_dll # imports ver 1.0
print my_dll.release_version()
sys.path.pop(-1)
del sys.modules['my_dll'] # remove module dict ref
from IPython import get_ipython # just in case you run in iPython
get_ipython().magic('reset -sf') # need to remove history!
del my_dll # delete the import object
import _ctypes # release the library handle
import ctypes
dll = ctypes.CDLL('my_dll.pyd')
_ctypes.FreeLibrary(dll._handle) # for some reason it needs to be
_ctypes.FreeLibrary(dll._handle) # done twice.
sys.path.append(r'\path\to\ver_1.1')
import my_dll
print my_dll.release_version()
The failures of reload() are overcome by releasing and reloading the C++ .pyd, as the following output shows
Version 1.0
Version 1.1
My issue is that I am then left with an extremely flimsy Python ecosystem that will crash at almost any new function call.
Is there a more updated methodology of doing this?
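One more robust alternative (my own suggestion, not from the references below): skip unloading entirely and evaluate each version in a throwaway subprocess, so the .pyd's lifetime is tied to a child interpreter rather than to fragile FreeLibrary() calls:

```python
import subprocess
import sys
import textwrap

def call_in_fresh_interpreter(module_dir, expression):
    """Evaluate `expression` in a brand-new interpreter with module_dir
    prepended to sys.path. The extension DLL is loaded and freed with
    the child process, so no FreeLibrary() juggling is needed."""
    code = textwrap.dedent(f"""\
        import sys
        sys.path.insert(0, {module_dir!r})
        print(eval({expression!r}))
    """)
    result = subprocess.run([sys.executable, "-c", code],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Hypothetical usage with the two versioned directories:
# call_in_fresh_interpreter(r'\path\to\ver_1.0',
#                           "__import__('my_dll').release_version()")
```

Each call pays interpreter start-up cost, but the parent process stays stable no matter which DLL version is exercised.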
References
ctypes unload dll
How can I unload a DLL using ctypes in Python?
forcing ctypes.cdll.LoadLibrary() to reload library from file
Python runtime: recompiling and reusing C library
https://bugs.python.org/issue14597

Compiling an optional cython extension only when possible in setup.py

I have a Python module fully implemented in Python (for portability reasons).
A small part of the implementation has been duplicated in a Cython module to improve performance where possible.
I know how to install the .c modules created by Cython with distutils. However, if a machine has no compiler installed, I suspect the setup will fail even though the module is still usable in pure Python mode.
Is there a way to compile the .c module if possible but fail gracefully and install without it if compiling is not possible?
You will have to make some modifications both in your setup.py and in one __init__.py file in your module.
Let's say the name of your package is "module" and you have a functionality, sub, for which you have pure Python code in the sub subfolder and the equivalent C code in the c_sub subfolder.
For example, in your setup.py:
import logging
from setuptools import setup
from setuptools.extension import Extension
from setuptools.command.build_ext import build_ext
from distutils.errors import CCompilerError, DistutilsExecError, DistutilsPlatformError

logging.basicConfig()
log = logging.getLogger(__file__)

ext_errors = (CCompilerError, DistutilsExecError, DistutilsPlatformError, IOError, SystemExit)

setup_args = {'name': 'module', 'license': 'BSD', 'author': 'xxx',
              'packages': ['module', 'module.sub', 'module.c_sub'],
              'cmdclass': {'build_ext': build_ext}}
ext_modules = [Extension("module.c_sub._sub", ["module/c_sub/_sub.c"])]

try:
    # Try building with the C code:
    setup(ext_modules=ext_modules, **setup_args)
except ext_errors as ex:
    log.warn(ex)
    log.warn("The C extension could not be compiled")
    ## Retry installing the module without the C extension:
    # Remove any previously defined build_ext command class.
    if 'build_ext' in setup_args['cmdclass']:
        del setup_args['cmdclass']['build_ext']
    # If this new 'setup' call doesn't fail, the module
    # will be successfully installed, without the C extension:
    setup(**setup_args)
    log.info("Plain-Python installation succeeded.")
Now you will need to include something like this in your __init__.py file (or at any place relevant in your case):
try:
    from .c_sub import *
except ImportError:
    from .sub import *
In this way the C version will be used if it was built; otherwise the plain Python version is used. It assumes that sub and c_sub provide the same API.
You can find an example of a setup file working this way in the Shapely package. Actually, most of the code I posted was copied or adapted from that file.
The Extension class constructor has an optional parameter:
optional - specifies that a build failure in the extension should not
abort the build process, but simply skip the extension.
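A minimal sketch of that approach (package and file names reused from the earlier example):

```python
from setuptools.extension import Extension

# A build failure in this extension is logged and skipped
# instead of aborting the whole install:
ext = Extension("module.c_sub._sub",
                ["module/c_sub/_sub.c"],
                optional=True)
```

Combined with the try/except import fallback in __init__.py, this avoids the double-setup() dance entirely.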
Here is also a link to the quite interesting history of the piece of code proposed by mgc.
The question How should I structure a Python package that contains Cython code is related; there the question is how to fall back from Cython to the "already generated C code". You could use a similar strategy to select which of the .py or .pyx code to install.

Compile and use python-openzwave with open-zwave in non-standard location

I manually compiled python-openzwave to work with the C++ library.
I would like to use it as a Kodi addon (OpenELEC running on a Pi 3), so I cannot use the standard installation.
I've compiled everything, downloaded the missing six and louie libs, and now try to run hello_world.py.
My current dirs structure is the following:
- root
- bin
- .lib
- config
Alarm.o
...
libopenzwave.a
libopenzwave.so
libopenzwave.so.1.4
...
- libopenzwave
driver.pxd
group.pxd
...
- louie
__init__.py
dispatcher.py
...
- openzwave
__init__.py
command.py
...
six.py
hello_world.py
But when I run hello_world.py, I get the following error:
Traceback (most recent call last):
File "hello_world.py", line 40, in <module>
from openzwave.controller import ZWaveController
File "/storage/.kodi/addons/service.multimedia.open-zwave/openzwave/controller.py", line 34, in <module>
from libopenzwave import PyStatDriver, PyControllerState
ImportError: No module named libopenzwave
If I move libopenzwave.a and libopenzwave.so to root folder, then I get the following error:
Traceback (most recent call last):
File "hello_world.py", line 40, in <module>
from openzwave.controller import ZWaveController
File "/storage/.kodi/addons/service.multimedia.open-zwave/openzwave/controller.py", line 34, in <module>
from libopenzwave import PyStatDriver, PyControllerState
ImportError: dynamic module does not define init function (initlibopenzwave)
What is wrong with my setup?
In general the steps required consist of calls to make build, which handles building the .cpp files for openzwave and downloading all dependencies (including Cython), and make install, which runs setup-api.py, setup-lib.py (this setup script also creates the C++ Python extension for openzwave), setup-web.py and setup-manager.py.
Since you cannot run make install as you specified and are instead using the archive they provide, the only other option for creating the Python extension, after building the openzwave library with make build, is generating the .so files for it without installing to standard locations.
Building the .so for the Cython extension in the same folder as the Cython scripts is done by running:
python setup.py build_ext --inplace
This should create a shared library in src-lib named libopenzwave.so (it is different from the libopenzwave.so contained in the bin/ directory) which contains all the functionality specified in the extension module. You could try adding that to the libopenzwave folder.
If you pass special compiler flags during make build for building the openzwave library you should specify the same ones when executing the setup-lib.py script. This can be done by specifying the CFLAGS before executing it (as specified here) or else you might have issues like error adding symbols: File in wrong format.
Here's the description of the python-openzwave's build from the question's perspective. Almost all the steps correspond to the root Makefile's targets.
Prerequisites. There are several independent targets with little to no organization. Most use Debian-specific commands.
Cython is not needed if building from an archive (details below)
openzwave C++ library (openzwave openzwave/.lib/ target).
Build logic: openzwave/Makefile, invoked without parameters (but with inherited environment).
Inputs: openzwave/ subtree (includes libhidapi and libtinyxml, statically linked).
Outputs: openzwave/.lib/libopenzwave.{a,so}
Accepts PREFIX as envvar (/usr/local by default)
The only effect that matters here: $(PREFIX)/etc/openzwave/ is assigned to a macro which adds a search location for config files (Options.cpp): config/ -> /etc/openzwave/ -> <custom location>.
libopenzwave Python C extension module (install-lib target - yes, the stock Makefile cannot just build it; the target doesn't even have the dependency on the library).
Build logic: setup-lib.py
Inputs: src-lib/, openzwave/.lib/libopenzwave.a
Outputs: build/<...>/libopenzwave.so - yes, the same name as openzwave's output, so avoid confusing them
By default, openzwave is linked statically with the module so you don't need to include the former into a deployment
The module does, however, need the config folder from the library. It is included by the build script when making a package.
Contrary to what Jim says, Cython is not needed to build from an archive, the archive already includes the generated .cpp.
Now, the catch is: the module itself uses pkg_resources to locate its data. So you cannot just drop the .so and config into the current directory and call it a day. You need to make pkg_resources.get_distribution('libopenzwave') succeed.
pkg_resources claims to support "normal filesystem packages, .egg files, and unpacked .egg files."
In particular, I was able to pull this off: make an .egg (setup-lib.py bdist_egg), unpack it into the current directory and rename EGG-INFO to libopenzwave.egg-info (like it is in site-packages). A UserWarning is issued if I don't specifically add the .so's location to PYTHONPATH/sys.path before importing the module.
openzwave, pyozwman and pyozwweb Python packages (install)
These are pure Python packages. The first one uses the C extension module; the others use the first one.
Build logic: setup-api.py,setup-manager.py,setup-web.py
Input: src-*/
Output: (pure Python)
They only use pkg_resources.declare_namespace() so you're gonna be fine with just the proper files/dirs on sys.path without any .egg-info's
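Putting this together, the import-time setup on the Kodi box might look like the following sketch (the addon path is taken from the traceback; using src-lib as the inplace build location is an assumption):

```python
import sys
from pathlib import Path

# Hypothetical addon layout; src-lib/ holds the inplace-built extension.
addon_root = Path("/storage/.kodi/addons/service.multimedia.open-zwave")
sys.path.insert(0, str(addon_root / "src-lib"))

try:
    import libopenzwave  # the Python extension, not the C++ libopenzwave.so
except ImportError:
    pass  # expected anywhere outside the actual Kodi installation
```

The key point is that the entry on sys.path must hold the Python extension module (plus its .egg-info), not the plain C++ shared library of the same name.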
