I am new to Python programming and have encountered an issue importing modules.
I have a main application (compare.py) with the following imports:
# import the necessary packages
from skimage.measure import structural_similarity as ssim
import matplotlib.pyplot as plt
import numpy as np
import os
import skimage
from skimage import io
from skimage import color
from epilib import mse
from epilib import compare_images
and I have defined two functions in epilib, one called mse() and one called compare_images().
The code in mse() requires numpy. When I execute 'python compare.py', I get the following error message:
File "C:\Users\Dan\epilib.py", line 7, in mse err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
NameError: name 'np' is not defined
I assumed that because 'import numpy as np' was executed prior to importing epilib, the numpy library would be available to epilib. When I added 'import numpy as np' to the top of epilib, the issue was resolved.
I don't see it as very efficient to have to repeat all the import statements in epilib. I was hoping to keep epilib as just a library of functions that I could import into various Python programs as required.
Is there a way to accomplish this?
That is not how Python works. If you want to use the numpy library in a module (in this case the epilib module), you need to import it in that module as well; epilib does not see the numpy module imported in your compare.py.
You should import numpy in epilib.py as:
import numpy as np
I do not think there would be any efficiency issue: once Python imports a module for the first time, it caches the module in sys.modules, so whenever you re-import it (even in a different module), as long as it is the same Python process, Python will not re-import it; it will return the module object from sys.modules.
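For example, a minimal sketch of epilib.py with its own import (the mse() body follows the line from your traceback; compare_images() is only a placeholder here):
# epilib.py
import numpy as np  # epilib needs its own import, even though compare.py also imports numpy

def mse(imageA, imageB):
    # mean squared error between two images
    err = np.sum((imageA.astype("float") - imageB.astype("float")) ** 2)
    err /= float(imageA.shape[0] * imageA.shape[1])
    return err

def compare_images(imageA, imageB):
    # placeholder: report the MSE of the two images
    return mse(imageA, imageB)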
Related
I have a script which is importing lots of packages, including import numpy as np.
I have lots of scripts which need to import all of these packages (including some of my own). To make my life easier, I have a file called mysetup.py in my path to import all the packages. It includes the statement "import numpy as np" inside a function.
I run "main.py", which runs the following:
from mysetup import *
import_my_stuff()
print(np.pi)
"mysetup.py"
def import_my_stuff():
    import numpy as np
    return
However, I am unable to use numpy in "main.py" - this code will fail. Any suggestions as to why?
The problem you are facing is a consequence of a very important feature of Python: namespaces.
https://docs.python.org/3/tutorial/classes.html#python-scopes-and-namespaces
https://realpython.com/python-namespaces-scope/
Basically, in your case, when you do that (numpy) import inside the (import_my_stuff) function, you are binding the name np to the numpy module inside the function's namespace (scope, if you prefer).
To solve your issue (the way you are doing it; not the only way), you should simply import everything at the module top level (without a function encapsulating the imports):
mysetup.py:
import numpy as np
# other modules...
main.py:
from mysetup import *
print(np.pi)
Imports in functions are not the best idea.
But you can just define whatever imports you need in the top-level code of mysetup.py:
import numpy as np
and then it will be available when you import * from mysetup
from mysetup import *
print(np.pi)
I looked at the file "pylab.py" in matplotlib's directory and found that it contains a great many imports, and then defines a single variable "bytes" on the last line. Here are the last several lines of this file:
from numpy.fft import *
from numpy.random import *
from numpy.linalg import *
import numpy as np
import numpy.ma as ma
# don't let numpy's datetime hide stdlib
import datetime
# This is needed, or bytes will be numpy.random.bytes from
# "from numpy.random import *" above
bytes = six.moves.builtins.bytes
I wonder what the purpose of such a file is when it only defines a seemingly useless variable. Also, what is the purpose of writing code like from matplotlib import pylab?
The matplotlib docs say:
pylab is a convenience module that bulk imports matplotlib.pyplot (for plotting) and numpy (for mathematics and working with arrays) in a single name space. Although many examples use pylab, it is no longer recommended.
So for example, you can do
>>> from pylab import *
And you have imported all the names imported by pylab into your local namespace. This is convenient when using the interactive shell.
Additionally, pylab imports datetime and bytes. This is because the from numpy.foo import * statements import numpy objects named bytes and datetime which are not the same as the standard python objects with these names, so they need to be overridden with the standard versions.
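As a rough sketch of that shadowing problem (assuming numpy.random still exposes a bytes function through its __all__; builtins here stands in for six.moves.builtins):
import builtins

from numpy.random import *   # the wildcard import brings in numpy.random.bytes

print(bytes)                 # now the numpy.random function, not the built-in type

# restore the standard built-in, as pylab does with six.moves.builtins.bytes
bytes = builtins.bytes
print(bytes)                 # <class 'bytes'> again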
The practice of importing names into a module just so other modules can import them from there instead of the original module is not unusual. For example, given this module:
$ cat foo/__init__.py
from bar import *
from baz.quux import *
from spam import eggs
Other modules can do from foo import eggs rather than from foo.spam import eggs. Apart from the convenience of less typing, this approach hides the internal structure of the foo package from its clients. As long as they import from the top level module they need not be concerned that the internal structure of the package may change over time. This is a form of the facade design pattern.
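For instance, client code would then be written as:
# depend only on the package's public facade
from foo import eggs

# rather than reaching into the internal layout:
# from foo.spam import eggs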
Aloha!
I have two blocks of code, one that will work and one that will not. The only difference is a commented-out line of code for a numpy module I don't use. Why am I required to import that module when I never reference "npm"?
This command works:
import numpy as np
import numpy.matlib as npm
V = np.array([[1,2,3],[4,5,6],[7,8,9]])
P1 = np.matlib.identity(V.shape[1], dtype=int)
P1
This command doesn't work:
import numpy as np
#import numpy.matlib as npm
V = np.array([[1,2,3],[4,5,6],[7,8,9]])
P1 = np.matlib.identity(V.shape[1], dtype=int)
P1
The above gets this error:
AttributeError: 'module' object has no attribute 'matlib'
Thanks in advance!
Short Answer
This is because numpy.matlib is an optional sub-package of numpy that must be imported separately.
The reason for this feature may be:
In particular for numpy, the numpy.matlib sub-module redefines numpy's functions to return matrices instead of ndarrays, an optional feature that many may not want
More generally, to load the parent module without loading a potentially slow-to-load module which many users may not often need
Possibly, namespace separation
When you import just numpy without the sub-package matlib, Python will be looking for .matlib as an attribute of the numpy package. That attribute is not assigned to numpy until numpy.matlib has been imported (see the discussion below).
Sub-Modules and Binding
If you're wondering why np.matlib.identity works without having to use the keyword npm, that's because when you import the sub-module matlib, the parent module numpy (named np in your case) will be given an attribute matlib which is bound to the sub-module. This only works if you first define numpy.
From the reference:
When a submodule is loaded using any mechanism (e.g. importlib APIs, the import or import-from statements, or built-in __import__()) a binding is placed in the parent module’s namespace to the submodule object.
Importing and __init__.py
The choice of what to import is determined in the modules' respective __init__.py files in the module directory. You can use the dir() function to see what names the respective modules define.
>>> import numpy
>>> 'matlib' in dir(numpy)
# False
>>> import numpy.matlib
>>> 'matlib' in dir(numpy)
# True
Alternatively, if you look directly at the __init__.py file for numpy you'll see there's no import for matlib.
Namespace across Sub-Modules
If you're wondering how the namespace is copied over smoothly;
The matlib source code runs this command to copy over the numpy namespace:
import numpy as np # (1)
...
# need * as we're copying the numpy namespace
from numpy import * # (2)
...
__all__ = np.__all__[:] # copy numpy namespace # (3)
Line (2), from numpy import * is particularly important. Because of this, you'll notice that if you just import numpy.matlib you can still use all of numpy's names without having to import numpy!
Without line (2), the namespace copy in line (3) would only be attached to the sub-module. Interestingly, you can still do a funny command like this because of line (3).
import numpy.matlib
numpy.matlib.np.matlib.np.array([1,1])
This is because the np.__all__ is attached to the np of numpy.matlib (which was imported via line (1)).
You never use npm but you do use np.matlib, so you could change your 2nd import line to just:
import numpy.matlib
Or you could keep your 2nd import line as is but instead use:
P1 = npm.identity(V.shape[1], dtype=int)
Is there a reason you don't use np.identity?
P1 = np.identity(V.shape[1], dtype=int)
This module contains all functions in the numpy namespace, with the following replacement functions that return matrices instead of ndarrays.
Unless you are wedded to the 2d np.matrix subclass, you are better off sticking with the regular ndarray versions.
(Others have pointed out that the reason for the import behavior lies in the __init__ specs for numpy. numpy imports most, but not all, of its submodules. The ones it does not automatically import are used less often. It's a polite way of saying: you don't really need this module.)
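As a quick sketch of that difference (assuming a numpy version where numpy.matlib is still available):
import numpy as np
import numpy.matlib

# plain numpy returns a regular ndarray
a = np.identity(3, dtype=int)
print(type(a))    # <class 'numpy.ndarray'>

# the matlib wrapper returns the np.matrix subclass instead
m = np.matlib.identity(3, dtype=int)
print(type(m))    # <class 'numpy.matrix'>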
I am trying to write a function, which is itself loaded, to quickly import a bunch of modules globally.
I thought that, essentially, loaded modules could be treated as variables so I tried:
def loadMods():
    global np
    import numpy as np
and after calling loadMods() to load numpy, using np caused no problem.
What I then did was to create a separate .py file called loadTest containing
# loadTest module
# coding: utf-8
def loadMod():
    global np
    import numpy as np
I then attempted to import numpy using this .py file in Python (2.7):
import loadTest
loadTest.loadMod()
but now when I try to use np I get
File "<stdin>", line 1, in <module>
NameError: name 'np' is not defined
Why does this occur? Any help or alternative ways of doing this would be much appreciated. Thanks a bunch :)
Instead of making a function to do this, why not make another module? You could name it something like modules.py and put all of your imports in there:
import numpy as np
import os
import sys
...
Then, all you need to do is a wildcard import:
from modules import *
and everything will be made available.
You must first define np at the top level of loadTest.
In loadTest:
np = None
Somewhere else:
import loadTest
loadTest.loadMod()
np = loadTest.np
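Put together, a minimal sketch of that approach (both files shown, with the caller's explicit rebinding):
# loadTest.py
np = None  # module-level placeholder, rebound by loadMod()

def loadMod():
    global np          # "global" here means global to loadTest, not to the caller
    import numpy as np

# caller script
import loadTest

loadTest.loadMod()
np = loadTest.np  # explicitly pull the module object into this namespace
print(np.pi)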
I have a script which runs as a standalone program; however, I'd like to be able to use it as a callable function as well. Currently, when I try to run it from another script, I get errors saying that certain modules are not defined/imported. For example:
NameError: global name 'exp' is not defined
Here's an example of my code that produces the error:
from PostREC3 import * ##import the required functions from the module
from numpy import array, shape, math, loadtxt, log10, vstack, arange
from scipy.integrate import quad
from pylab import all
from numpy import pi as pi
from assimulo.solvers.sundials import IDA
from assimulo.problem import Implicit_Problem
from math import exp, log10, fabs, atan, log
import pickle
import sys
results = PostREC(2,100,90,1.0,1, 1,"0",2 ) #run an imported function
output:
NameError: global name 'exp' is not defined
I've tried importing exp from within the function itself, but that doesn't change anything. As far as I'm aware, as long as I've imported them before using the function, they should be available for any other functions to use. So, is there something wrong with what I'm doing, or does this point to another error in the code itself?
O/S: Ubuntu 12.10
Python 2.7 64 bit
Import exp and any other modules/functions you need at the top of your PostREC3 module, not within a particular function.
Imports are not "global"; each module needs to import everything it needs to run, even if another module already did so.
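For example, a minimal sketch (the real contents of PostREC3.py are not shown in the question, so the function body here is just a placeholder):
# PostREC3.py
from math import exp, log10, fabs, atan, log   # imported at module level, not inside PostREC
import numpy as np

def PostREC(*args):
    # placeholder body: exp now resolves inside this module
    return exp(1.0)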