Python import statement with argument [duplicate]

This question already has answers here:
Python import functions from module twice with different internal imports
(3 answers)
Closed 4 years ago.
I am using numpy in one of my libraries. No surprise there.
One user would essentially like a copy of my project where I don't use the default numpy, but the one bundled with autograd. For instance, let's say I have a dumb function:
import numpy

def doSomething(x):
    return numpy.sin(x)
They would like a copy of the library where all of these import numpy are replaced by from autograd import numpy:
from autograd import numpy

def doSomething(x):
    return numpy.sin(x)
This would allow them to easily compute gradients and jacobians of my functions.
I would like to know what the easiest way to handle this is without copying the whole codebase and replacing all of these lines.
Options I am aware of:
I could make a copy of the codebase (lib and lib_autograd) where the first uses import numpy, and the second uses from autograd import numpy. This is bad because then I have to maintain two codebases.
I could automatically import from autograd if it is available:
try:
    from autograd import numpy
except ImportError:
    import numpy
The reason I do not want to do this is that many people have highly optimized numpy installs, whereas autograd's bundled numpy might not be. So I want to give the user the choice of which version to import. Forcing the autograd version on anyone who happens to have it installed seems bad: it would not be apparent to the user what is going on, and they would have to uninstall autograd to use the library with their default numpy installation.
So what are my options?
Ideally there would be a way of doing something like passing a parameter to the import statement (I do realize that you can't do this):
useAutograd = False
from lib(useAutograd) import doSomething
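While the import statement itself cannot take parameters, importlib can get fairly close to this idea. A minimal sketch, with the caveat that the helper name get_numpy is made up for illustration and autograd is only loaded when explicitly requested:

```python
import importlib

def get_numpy(use_autograd=False):
    """Return autograd's numpy wrapper if requested, else plain numpy."""
    module_name = "autograd.numpy" if use_autograd else "numpy"
    return importlib.import_module(module_name)

# Default: plain numpy, no autograd required.
numpy = get_numpy(use_autograd=False)
print(numpy.sin(0.0))  # 0.0
```

The library could expose such a factory (or a module-level flag read before the first import) so callers opt in instead of being switched silently.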

You can have a 'conditional' import with:

try:
    from autograd import numpy
except ImportError:
    import numpy
Another option is an environment variable that switches between autograd's numpy and the regular one, because with the try/except above you always get autograd.numpy whenever it exists; there is no way to choose plain numpy while the autograd package is installed.
To elaborate on giving the user an option to switch, here is one possibility:

import os

if os.environ.get('AUTOGRADNUMPY'):
    try:
        from autograd import numpy
    except ImportError:
        import numpy
else:
    import numpy
Set the environment variable AUTOGRADNUMPY to True (or any other non-empty string) when you want to load numpy from the autograd package. If it is unset or empty, regular numpy is imported.
All of this stands if user has at least numpy installed.
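To make the "non-empty string" behaviour concrete, here is a small sketch of just the switch logic, stubbed so it does not require autograd at all:

```python
import os

# os.environ.get returns None for unset variables, and bool("") is False,
# so only a non-empty value enables the autograd path.
os.environ["AUTOGRADNUMPY"] = "True"
print(bool(os.environ.get("AUTOGRADNUMPY")))  # True

os.environ["AUTOGRADNUMPY"] = ""
print(bool(os.environ.get("AUTOGRADNUMPY")))  # False: empty string is falsy

del os.environ["AUTOGRADNUMPY"]
print(bool(os.environ.get("AUTOGRADNUMPY")))  # False: unset gives None
```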

This might help:
try:
    from autograd import numpy as np
except ImportError:
    import numpy as np

...
np.sum(..)

Related

Why couldn't I import np.typing.NDArray, but now I can?

I'm running into a weird situation with Python imports.
Does someone know how this works?
I have:
from typing import Sequence, Union
import numpy as np

values: Union[Sequence[int], np.typing.NDArray]
probs: Union[Sequence[float], np.typing.NDArray]

Now that fails because np.typing can't be accessed this way. I guess that is because it is not imported in numpy's __init__ file?
Ok, so now I replace this with:
from typing import Sequence, Union
import numpy as np
import numpy.typing as npt

values: Union[Sequence[int], npt.NDArray]
probs: Union[Sequence[float], np.typing.NDArray]

and now it works, but why doesn't it break on the 'probs' line? There, I still have the same expression that was giving me an error before. What changed to make this work?
Context: Numpy 1.21.3, Python 3.7
Note: I know I can simply replace both statements, but I was surprised by why this doesn't give an error and wanted to know how this worked.
After you run

import numpy.typing as npt

the interpreter "finds out" about the additional parts of np: importing a submodule also binds it as an attribute of the parent package.
You can check this with:
import numpy as np
len(dir(np)) # 602 (in my case)
import numpy.typing as npt
len(dir(np)) # 604
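The same mechanism can be observed with a stdlib package. Importing a submodule registers it in sys.modules and binds it as an attribute of its parent package object; xml and xml.etree here are just a convenient example of that general rule:

```python
import sys
import xml

# In a fresh interpreter, `import xml` does not load xml.etree yet.
print(hasattr(xml, "etree"))  # usually False at this point

import xml.etree.ElementTree

# The submodule import bound `etree` onto the parent package object
# and registered both modules in sys.modules.
print(hasattr(xml, "etree"))       # True
print("xml.etree" in sys.modules)  # True
```

This is why the explicit `import numpy.typing as npt` also made the attribute access `np.typing` succeed afterwards: both names refer to the same module object.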

Python best practice for when a module is not always available

I have Python code that runs on CUDA. Now I need to support new deployment devices that cannot run CUDA because they don't have Nvidia GPUs. Since I have many cupy imports in the code, I am wondering what the best practice is for this situation.
For instance, I might have to import certain classes based on the availability of cuda. This seems nasty. Is there any good programming pattern I can follow?
For instance, I would end up doing something like this:
from my_custom_autoinit import is_cupy_available

if is_cupy_available:
    import my_module_that_uses_cupy
where my_custom_autoinit.py is:
try:
    import cupy as cp
    is_cupy_available = True
except ModuleNotFoundError:
    is_cupy_available = False
This comes with a nasty drawback: every time I want to use my_module_that_uses_cupy I need to check if cupy is available.
I don't personally like this and I guess somebody came up with something better than this. Thank you
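One pattern that avoids repeating the availability check at every call site is a small guard decorator. This is only a sketch: the names requires_cupy and is_cupy_available are made up here, not part of cupy:

```python
import functools

try:
    import cupy  # noqa: F401
    is_cupy_available = True
except ModuleNotFoundError:
    is_cupy_available = False

def requires_cupy(func):
    """Fail with a clear message when a cupy-dependent function is called."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not is_cupy_available:
            raise RuntimeError(
                f"{func.__name__} requires cupy, which is not installed")
        return func(*args, **kwargs)
    return wrapper

@requires_cupy
def gpu_kernel():
    return "would run on the GPU"
```

Callers then use gpu_kernel without checking anything; the check lives in one place, and a missing dependency produces an explicit error instead of a NameError deep inside the module.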
You could add a module called cupywrapper to your project, containing your try..except
cupywrapper.py
try:
    import cupy as cp
    is_cupy_available = True
except ModuleNotFoundError:
    import numpy as cp
    is_cupy_available = False
I'm assuming you can substitute cupy with numpy because from the website:
CuPy's interface is highly compatible with NumPy; in most cases it can be used as a drop-in replacement. All you need to do is just replace numpy with cupy in your Python code.
Then, in your code, you'd do:
import cupywrapper
cp = cupywrapper.cp
# Now cp is either cupy or numpy
x = [1, 2, 3]
y = [4, 5, 6]
z = cp.dot(x, y)
print(z)
print("cupy? ", cupywrapper.is_cupy_available)
On my computer, I don't have cupy installed and this falls back to numpy.dot, giving an output of
32
cupy? False

Can I import modules in python using a function

I am working on a small library and I need to know: can I import modules like numpy, sklearn, etc. using functions? For example:
def ml():
    import numpy as np
    import pandas as pd
    x = np.array([1, 2, 647, 345, 3, 7, 3, 8, 36, 64])
Is this possible? Put simply: can I import a module inside a function and then use it later, outside the function? The main idea is that when the user calls the function ml, all the modules related to machine learning are imported and ready to use; x = np.array was just an example.
UPDATED
This should work:

import importlib

def importmd(modulex):
    return importlib.import_module(modulex)  # Return the module object

np = importmd("numpy")  # Same as: import numpy as np
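One caveat with importing inside a function: a plain import statement binds the name only in the function's local scope, so it vanishes when the function returns. To make the modules usable afterwards, as the question intends, the bindings have to land in the module's globals. A sketch of that (mutating globals() like this works, but is usually discouraged in favour of module-level imports):

```python
import importlib

def ml():
    # Bind the module into this module's global namespace so it is
    # still visible after the function returns.
    globals()["np"] = importlib.import_module("numpy")

ml()
x = np.array([1, 2, 647, 345, 3, 7, 3, 8, 36, 64])
print(x.max())  # 647
```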

VSCode IntelliSense with python C extension module (petsc4py)

I'm currently using a python module called petsc4py (https://pypi.org/project/petsc4py/). My main issue is that none of the typical intellisense features seems to work with this module.
I'm guessing it might have something to do with it being a C extension module, but I am not sure exactly why this happens. I initially thought that intellisense was unable to look inside ".so" files, but it seems that numpy is able to do this with the array object, which in my case is inside a file called multiarray.cpython-37m-x86_64-linux-gnu (check example below).
Does anyone know why I see this behaviour with the petsc4py module? Is there anything that I (or the developers of petsc4py) can do to get IntelliSense to work?
Example:
import sys
import petsc4py
petsc4py.init(sys.argv)
from petsc4py import PETSc
x_p = PETSc.Vec().create()
x_p.setSizes(10)
x_p.setFromOptions()
u_p = x_p.duplicate()
import numpy as np
x_n = np.array([1,2,3])
u_n = x_n.copy()
In this example, when trying to work with a Vec object from petsc4py, doing u_p.duplicate() cannot find the function and the suggestion is simply a repetition of the function immediately before. However, using an array from numpy, doing u_n.copy() works perfectly.
If you're compiling in-place then you're bumping up against https://github.com/microsoft/python-language-server/issues/197.
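One way to see why numpy completes while many C-extension packages don't: numpy ships .pyi type stubs next to its compiled modules, and language servers read those instead of trying to analyze the binaries. A quick check, assuming numpy >= 1.20, which is when the stubs were bundled:

```python
import pathlib
import numpy

# Collect the stub files shipped at the numpy package root.
pkg_dir = pathlib.Path(numpy.__file__).parent
stubs = sorted(p.name for p in pkg_dir.glob("*.pyi"))

# numpy bundles at least __init__.pyi at the package root.
print("__init__.pyi" in stubs)  # True
```

A C-extension package without stubs gives the language server nothing to work with, which matches the petsc4py behaviour described above.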

What is a difference between "pylab" and "matplotlib.pyplot"?

I am trying to use Matplotlib and have realized that I can import it in two different ways, and in both cases it works the same way: import pylab as p or import matplotlib.pyplot as p.
So, my question is what is the difference between these two ways?
From the official documentation:
Pylab combines the pyplot functionality (for plotting) with the numpy
functionality (for mathematics and for working with arrays) in a
single namespace, making that namespace (or environment) even more
MATLAB-like. For example, one can call the sin and cos functions just
like you could in MATLAB, as well as having all the features of
pyplot.
Note that pylab only imports from the top-level numpy namespace. Therefore, this will work:

import numpy

numpy.array      # works
numpy.distutils  # finds a module
And this will not:

import pylab

pylab.array      # works; it is actually numpy's array
pylab.distutils  # raises an error
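That distinction can be checked without matplotlib installed: pylab effectively does `from numpy import *` (plus pyplot's names), and a star import only pulls numpy's exported top-level names, not submodules like distutils. A sketch using exec to run the star import in an isolated namespace:

```python
import numpy

# Simulate the numpy half of pylab's star import in a clean namespace.
ns = {}
exec("from numpy import *", ns)

print(ns["array"] is numpy.array)  # True: top-level name is re-exported
print("distutils" in ns)           # False: submodules are not pulled in
```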
