Under normal circumstances, external Python modules such as scipy and numpy have their C parts compiled into shared objects when they are installed. When Python executes import scipy, it dynamically loads those shared objects.
I am now working on a platform that does not support dynamic loading at all. As a result, I have to link those modules statically into Python.
My current approach is to compile all of the scipy/numpy source code together with Python, and to call each module's initialization function while Python initializes:
void
Py_InitializeEx(int initsigs)
{
    ...
    // init scipy modules statically
    // below are the scipy module init functions
    init_comb();
    init_cython_special();
    ...
}
However, this brings up another problem. Many module initialization functions, especially the ones auto-generated by Cython, contain code that imports their parent packages. For example, the cython_special init function imports scipy, but at the point where it is called, the scipy initialization has not completed yet.
My question is: is there an easy way to link these modules statically? What would you suggest to solve this problem?
Thanks.
Use PyImport_AppendInittab - it tells Python in advance about a module initialization function associated with a specific name. You identify all the compiled modules you need, link them statically, and then, before calling Py_Initialize, add each of them to the inittab.
Nothing happens until the module is imported at runtime, at which point the correct initialization function is run.
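As a rough sketch of that flow (the module name spam and its PyInit_spam symbol are placeholders; substitute the real init functions of the extensions you actually link in):

#include <Python.h>

// Init function exported by a statically linked extension module.
// "spam" is a placeholder; use the actual PyInit_* symbols you linked in.
extern PyObject *PyInit_spam(void);

int main(int argc, char **argv)
{
    // Register the module *before* Py_Initialize. Nothing is initialized yet;
    // the function is only called when the module is first imported.
    if (PyImport_AppendInittab("spam", PyInit_spam) == -1) {
        fprintf(stderr, "could not extend the inittab\n");
        return 1;
    }

    Py_Initialize();

    // The init function runs here, after the interpreter (and any parent
    // packages imported along the way) are fully set up.
    PyRun_SimpleString("import spam");

    Py_Finalize();
    return 0;
}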
If I understood you correctly, what you could do is add the path of the directory where the modules are located:
import sys
sys.path.insert(0, '/path/to/modules')

from module1 import *
from module2 import *
# etc.
I wrote a custom Python package for Ansible to handle business logic for some servers I manage. I have multiple files, and they reference each other by re-importing the package.
So my package named <MyCustomPackage> has functions <Function1>, <Function2>, <Function3>, etc., all in their own files. Some of these functions reference other functions in the same package, so to do that each file has:
import MyCustomPackage
at the top. I did it this way instead of using relative imports because I'm also unit testing these, and mocking would not work with relative paths because of an __init__ file in the test directory that was needed for test discovery. The only way I could mock was by importing the package itself. Seemed simple enough.
The problem is with Ansible. These packages are in module_utils. I import them with:
from ansible.module_utils.MyCustomPackage import MyCustomPackage
but when I run the commands I get "module not found" errors, which I traced back to the import MyCustomPackage statement inside the package itself.
So - how should I be structuring my package? Should I try again with relative file imports, or have the package modify the path so it's found with the friendly name?
Any tips would be helpful! Or if someone has a module they've written with Python modules in module_utils and unit tests that they'd be willing to share, that'd be great also!
Many people have problems with relative imports and imports in general in Python because they are ambiguous and surprisingly depend on your current working directory (and other things).
Thus I've created an experimental, new import library: ultraimport
It gives you more control over your imports and lets you do file system based, relative imports.
Given that you have a file function1.py, to import a function from function2.py, you would then write:
import ultraimport
Function2 = ultraimport('__dir__/function2.py', 'Function2')
This will always work, no matter how you run your code. It also does not force you to a specific package structure. You can just have any files you like.
I'm working on a project that requires C++ to call a program written in Python that relies on Python exclusive modules.
The project is handled with Qt Creator, and Python 3.7.5 and its packages are installed via Miniconda. I've gotten basic embedding working using Pybind11, and simple interfacing works; however, most external modules cannot be imported.
For example, when importing Numpy through Pybind11, the following error is thrown (reduced for brevity):
Importing the numpy c-extensions failed.
Original error was: /home/brentnallt/miniconda3/envs/car_class_nogpu/lib/python3.7/site-packages/numpy/core/_multiarray_umath.cpython-37m-x86_64-linux-gnu.so: undefined symbol: PyMemoryView_FromObject
A similar error occurs when importing tensorflow through Pybind11:
ImportError: /home/brentnallt/miniconda3/envs/car_class_nogpu/lib/python3.7/lib-dynload/_ctypes.cpython-37m-x86_64-linux-gnu.so: undefined symbol: PyUnicode_FromFormat
It appears to be a problem with Python's C API symbols not being found when loading C extension shared libraries. However, modules like lxml, which use C source files, import just fine. Additionally, I can import the problem modules in projects separate from the one I'm working on, which implies it's a setup problem. Note that this test project doesn't actually use any Qt functionality, whereas the main one does.
My Python module search path (sys.path) looks like:
['/home/brentnallt/miniconda3/envs/car_class_nogpu/lib/python3.7', '/home/brentnallt/miniconda3/envs/car_class_nogpu/lib/python3.7/site-packages', '/home/brentnallt/miniconda3/envs/car_class_nogpu/lib/python37.zip', '/home/brentnallt/miniconda3/envs/car_class_nogpu/lib/python3.7/lib-dynload', '.']
Are there any special considerations I have to make when embedding with Qt Creator? Or is this likely a different problem from a setup error?
Maybe you can consider using PythonQt as an alternative for calling and importing Python libraries from a Qt application.
I've used it a lot in my projects and it has never failed me, though I've never used it with any data science modules; maybe you could give it a chance:
https://mevislab.github.io/pythonqt/
I am running Python 3.6.2 and trying to import other files into my shell prompt as needed. I have the following code inside my_file.py.
import numpy as np

def my_file(x):
    s = 1/(1+np.exp(-x))
    return s
From my 3.6.2 shell prompt I call
from my_file import my_file
But in my shell prompt, if I want to use numpy I still have to import it, even though I have imported a file that itself imports numpy. Is this behavior by design? Or is there a way to import numpy only once?
import has three completely separate effects:
If the module has not yet been imported in the current process (by any script or module), execute its code (usually from disk) and store a module object with the resulting classes, functions, and variables.
If the module is in a package, (import the package first, and) store the new module as an attribute on the containing package (so that references like scipy.special work).
Assign the module ultimately imported to a variable in the invoking scope. (import foo.bar assigns foo; import baz.quux as frob assigns baz.quux to the name frob.)
The first two effects are shared among all clients, while the last is completely local. This is by design, as it avoids accidentally using a dependency of an imported module without making sure it’s available (which would break later if the other modules changed what they imported). It also lets different clients use different shorthands.
As hpaul noted, you can use another module's imports via a qualified name, but this is abusing the module's interface just like any other use of a private name, unless the module intends to publish names for other modules (like six.moves, for example, or os.path, which is really just posixpath or ntpath published under another name).
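A short sketch of the distinction, assuming the my_file.py from the question is importable:

from my_file import my_file   # binds only the function name my_file in this scope

print(my_file(0))             # 0.5 -- numpy was imported and is usable inside my_file

# "np" is NOT defined here: numpy was executed once, but only my_file's
# namespace received the name.  Reaching it through the module still works:
import my_file as mf          # reuses the already-initialized module object
print(mf.np.exp(0.0))         # 1.0 -- qualified access to my_file's import

# Importing numpy here is cheap: the module is already in sys.modules,
# so this only binds the local name np.
import numpy as np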
I am writing an application in C# with Visual Studio and am using IronPython to write some Python scripts for my application. However, IronPython does not include the entire standard library by default, so to import some modules (such as os) I need to point my C# code to where the os module actually is. I also understand that it will still be limited to libraries implemented in pure Python.
Ultimately I want something that can be installed on another machine. My current workaround is to include a copy of https://github.com/python/cpython/tree/2.7/Lib in the Debug folder where the executable runs, and it seems excessive/unnecessary to include the entire thing. I tried placing just the files I need (for example os.py) there, but obviously os imports other modules, which import other modules, and so on. I would have to re-run the code to see which module it couldn't find and add them in one by one, and it was getting too tedious.
I was wondering if there was any sort of resource that specifies the relationships between standard library modules and could tell me exactly what files to copy. Essentially what I'm looking for is the graph of the standard library imports. So if I want to import os in these scripts I know to copy os.py, ntpath.py, ...
Thanks
You probably don't need the imports as a tree, just as a simple list, so you can copy the needed files. You can get that from sys.modules after you import everything your script needs - it will contain all the modules needed by the ones you imported, e.g.:
import sys   # even if you don't use it: it's a built-in module, won't add a file to the list, needed to get sys.modules
import os
import time
# import whatever else your script needs

# this gives a list of (module name, file) tuples
m = [(z, x.__file__) for z, x in sys.modules.items() if hasattr(x, "__file__")]
for x in m:
    print x[0], x[1]
I'm writing a Python package that does GPU computing using the PyCUDA library. PyCUDA needs to initialize a GPU device (usually by importing pycuda.autoinit) before any of its submodules can be imported.
In my own modules I import whatever submodules and functions I need from PyCUDA, which means that my own modules are not importable without first initializing PyCUDA. That's fine mostly, because my package does nothing useful without a GPU present. However, now I want to write documentation and Sphinx Autodoc needs to import my package to read the docstrings. It works fine if I put import pycuda.autoinit into docs/conf.py, but I would like for the documentation to be buildable on machines that don't have an NVIDIA GPU such as my own laptop or readthedocs.org.
What's the most elegant way to defer the import of my dependencies so that I can import my own submodules on machines that don't have all the dependencies installed?
The autodoc mechanism requires that all modules to be documented are importable. When this requirement is a problem, mocking (replacing parts of the system with mock objects) can be a solution.
Here is an article that explains how mock objects can be used when working with Sphinx: http://blog.rtwilson.com/how-to-make-your-sphinx-documentation-compile-with-readthedocs-when-youre-using-numpy-and-scipy/.
The gist of the article is that it should work if you add something like this to conf.py:
import sys
import mock  # See http://www.voidspace.org.uk/python/mock/

MOCK_MODULES = ['module1', 'module2', ...]
for mod_name in MOCK_MODULES:
    sys.modules[mod_name] = mock.Mock()
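For this question's package, MOCK_MODULES would presumably list the PyCUDA modules it imports, for example:

MOCK_MODULES = ['pycuda', 'pycuda.autoinit', 'pycuda.driver', 'pycuda.gpuarray']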
The usual method I've seen is to have a module-level function like foo.init() that sets up the GPU/display/whatever that you need at runtime but don't want automatically initialized on import.
You might also consider exposing initialization options here: what if I have 2 CUDA-capable GPUs, but only want to use one of them?
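A minimal sketch of that pattern, with illustrative names (mypackage, init, and _require_init are placeholders, not PyCUDA API):

# mypackage/__init__.py -- nothing GPU-related happens at import time
_context = None

def init(device_id=0):
    """Initialize the GPU explicitly; the caller picks the device."""
    global _context
    import pycuda.driver as drv   # deferred: only imported when init() is called
    drv.init()
    _context = drv.Device(device_id).make_context()

def _require_init():
    if _context is None:
        raise RuntimeError("call mypackage.init() before using GPU functions")

Sphinx (or any machine without an NVIDIA GPU) can then import mypackage freely, because pycuda is only touched inside init().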