There is one thing that I do not understand.
Why does this
import scipy # happens with several other modules, too. I took scipy as an example now...
matrix = scipy.sparse.coo_matrix(some_params)
produce this error:
AttributeError: 'module' object has no attribute 'sparse'
This happens because the scipy module doesn't have any attribute named sparse. That attribute only gets defined when you import scipy.sparse.
Submodules don't automatically get imported when you just import scipy; you need to import them explicitly. The same holds for most packages, although a package can choose to import its own submodules if it wants to. (For example, if scipy/__init__.py included a statement import scipy.sparse, then the sparse submodule would be imported whenever you import scipy.)
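A self-contained way to watch this happen, using a throwaway package built in a temp directory (the names `pkg` and `sub` are made up for the demo):

```python
import os
import sys
import tempfile

# Build a tiny package on the fly: pkg/__init__.py and pkg/sub.py
root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, "pkg"))
open(os.path.join(root, "pkg", "__init__.py"), "w").close()
with open(os.path.join(root, "pkg", "sub.py"), "w") as f:
    f.write("value = 42\n")
sys.path.insert(0, root)

import pkg
has_sub_before = hasattr(pkg, "sub")   # False: the bare import does not load submodules

import pkg.sub
has_sub_after = hasattr(pkg, "sub")    # True: pkg.sub is now bound as an attribute of pkg

print(has_sub_before, has_sub_after, pkg.sub.value)  # False True 42
```

The same mechanism explains the scipy example: `import scipy` runs only `scipy/__init__.py`, and the `sparse` attribute appears on the package object only once `scipy.sparse` has been imported by someone.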
Because you imported scipy, not sparse. Try from scipy import sparse?
AttributeError is raised when an attribute of an object is not available.
An attribute reference is a primary followed by a period and a name:
attributeref ::= primary "." identifier
To return a list of valid attributes for that object, use dir(), e.g.:
dir(scipy)
So you probably simply need to do: import scipy.sparse
The default namespace in Python is "__main__". When you use import scipy, Python binds the module object to the name scipy in your namespace; the module's contents live in the module's own namespace.
The rule in Python is: when you want to access an attribute from another namespace, you have to use the fully qualified attribute name.
I'm quite new at Python, but I'm experienced with other programming languages.
I have this line of code:
import module as m
If I try to access its functions:
m.spam.foo()
m.eggs.bar()
I get the following errors:
AttributeError: module 'module' has no attribute 'spam'
AttributeError: module 'module' has no attribute 'eggs'
On the other hand, if I write:
import module.spam, module.eggs
and then:
module.spam.foo()
module.eggs.bar()
It works properly.
Same if I write:
from module import spam, eggs
and:
spam.foo()
eggs.bar()
Works with no errors.
I prefer the first method since I don't have to manually import every single submodule before using it...
So, what's wrong with that method? Is there a workaround to make it work (I do not like the from module import * approach, since the imported names can be confused with my global variables)?
I searched everywhere inside the reference but could not find anything.
I don't know how well this solves your problem, because this approach is actually longer than the from module import spam, eggs method, but this question might help. Basically, you would use the package's __init__.py to load all the submodules.
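As a runnable sketch of that approach (the package name `autopkg` and the submodules `spam` and `eggs` are made up, and the package is generated in a temp directory so the example is self-contained): a package's `__init__.py` can use `pkgutil` to import every submodule it finds, so a single import makes them all available.

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway package "autopkg" whose __init__.py auto-imports
# every submodule it finds via pkgutil.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "autopkg")
os.mkdir(pkg_dir)

with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import importlib, pkgutil
        for _finder, _name, _ispkg in pkgutil.iter_modules(__path__):
            importlib.import_module(__name__ + "." + _name)
    """))

with open(os.path.join(pkg_dir, "spam.py"), "w") as f:
    f.write("def foo():\n    return 'foo'\n")
with open(os.path.join(pkg_dir, "eggs.py"), "w") as f:
    f.write("def bar():\n    return 'bar'\n")

sys.path.insert(0, root)
import autopkg as m

# Both submodules are usable after the single import:
print(m.spam.foo(), m.eggs.bar())  # foo bar
```

The trade-off is that every submodule is executed on the first import of the package, even the ones a given caller never uses.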
Most of the tutorials and books about Django or Flask import specific classes from files instead of importing the whole file.
For example, importing the DataRequired validator from wtforms.validators is done via from wtforms.validators import DataRequired instead of importing it via import wtforms.validators as valids and then accessing it as valids.DataRequired.
My question is: is there a reason for this?
I thought of something like avoiding loading a whole module for computation/memory optimization (is it really relevant?). Or is it simply to make the code more readable?
My question is: is there a reason for this?
from module_or_package import something is the canonical Pythonic idiom (when you only want to import something into your current namespace, of course).
Also, import module_or_package.something only works if module_or_package is a package and something a submodule; it raises ImportError: No module named something if something is a function, class, or any other object defined in module_or_package. You can see this in the stdlib with os.path (a submodule of the os package) vs datetime.date (a class defined in the datetime module):
>>> import os.path as p
>>> p
<module 'posixpath' from '/home/bruno/.virtualenvs/blook/lib/python2.7/posixpath.pyc'>
vs
>>> import datetime.date as d
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named date
I thought of something like avoiding loading a whole module for computation/memory optimization (is it really relevant?)
Totally irrelevant - importing a given name from a module requires importing the whole module. Actually, this:
from module_or_package import something
is only syntactic sugar for
import module_or_package
something = module_or_package.something
del module_or_package
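A quick way to see that the whole module really does get imported (the from-import just binds one extra name):

```python
import sys

from fractions import Fraction  # ask for one name only...

# ...yet the entire fractions module was executed and cached:
print("fractions" in sys.modules)                     # True
print(sys.modules["fractions"].Fraction is Fraction)  # True
```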
EDIT: You mention in a comment that
Right, but importing the whole module means loading it to the memory, which can be a reason for importing only a submodule/class
so it seems I failed to make the point clear: in Python, you cannot "import only a submodule/class", period.
In Python, import, class and def are all executable statements (and are actually just syntactic sugar for operations you can do 'manually' with functions and classes). Importing a module consists of executing all the code at the module's top level (which will instantiate function and class objects) and creating a module object (an instance of the module type) whose attributes are all the names defined at the top level via import, def and class statements or via explicit assignment. Only when all of this has been done can you access any name defined in the module, and this is why, as I stated above,
from module import obj
is only syntactic sugar for
import module
obj = module.obj
del module
But (unless you do something extreme like defining a terabyte-huge dict or list in your module) this doesn't actually take much time or eat much RAM, and a module is only effectively executed once per process, the first time it is imported; after that it is cached in sys.modules, so subsequent imports only fetch it from the cache.
Also, unless you actively prevent it, Python caches the compiled version of the module (the .pyc files) and only recompiles it if the .pyc is missing or older than the source .py file.
With regard to packages and submodules: importing a submodule will also execute the package's __init__.py and build a module instance from it (in other words, at runtime a package is also a module). Package initializers are canonically rather short, and actually quite often empty, FWIW...
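The caching is easy to observe: a repeated import statement just re-fetches the same module object from sys.modules.

```python
import sys
import math

first = sys.modules["math"]  # the cached module object

import math  # no re-execution: fetched straight from the cache

print(math is first)  # True: one module object per process
```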
It depends; in the tutorial, that was probably done for readability.
Usually, if you use most of the classes in a file, you import the file. If the file contains many classes but you only need a few, just import those.
It's both a matter of readability and optimization.
I have Python 2.7.3 installed on my Windows 7 computer. When I run the following code
import nltk, json, cPickle, itertools
import numpy as np
from nltk.tokenize import word_tokenize
from pprint import pprint
t_given_a = json.load(open('conditional_probability.json','rb'))
a_unconditional = json.load(open('age.json','rb'))
t_unconditional = cPickle.load(open('freqdist.pkl','rb'))['distribution']
The command prompt gives me the error
"ImportError: No Module named Multiarray."
I'm fairly new to Python and I'm not exactly sure why this error happened. I searched other threads, and many suggested using 'rb' instead of 'r'. I had 'rb' to begin with, and it's still throwing that error. Any suggestions?
When you pickle an object in Python, it saves the object's class as a string of package name + class name. On unpickling, Python tries to import that module and find that class in order to recreate the object. If that module isn't importable, you'll get an ImportError.
Just install that multiarray module; if you don't know which package it comes from, ask whoever you got the pickle file from.
From the docs:
Note that functions (built-in and user-defined) are pickled by “fully
qualified” name reference, not by value. This means that only the
function name is pickled, along with the name of the module the
function is defined in. Neither the function’s code, nor any of its
function attributes are pickled. Thus the defining module must be
importable in the unpickling environment, and the module must contain
the named object, otherwise an exception will be raised.
Similarly, classes are pickled by named reference, so the same restrictions in
the unpickling environment apply. Note that none of the class’s code
or data is pickled
[...] These restrictions are why picklable functions and classes must be
defined in the top level of a module
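This is easy to see directly: pickling a class writes only its module and qualified name into the stream, and unpickling imports that module and looks the name up again.

```python
import pickle
from collections import OrderedDict

# Pickling the class itself stores just the names, not the class's code:
data = pickle.dumps(OrderedDict)
print(b"collections" in data, b"OrderedDict" in data)  # True True

# Unpickling re-imports collections and resolves the name:
print(pickle.loads(data) is OrderedDict)  # True
```

If the unpickling environment cannot import `collections` (or whatever module the names point to), `pickle.loads` fails, which is exactly the situation in the question above.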
I need a function that returns the name of the package of the module from which the function was called. Getting the module's name is easy:
import inspect
module_name = inspect.currentframe().f_back.f_globals['__name__']
And stripping the last part to get the module's package is also easy:
package_name = '.'.join(module_name.split('.')[:-1])
But if the function is called from a package's __init__.py, the last part of the name should not be stripped. E.g. if called from foo/bar/__init__.py, module_name in the above example will be set to 'foo.bar', which is already the name of the package.
How can I check, from the module name or the module object, whether it refers to a package or a module?
The best way I found is getting the module object's __file__ attribute, if it exists, and check whether it points to a file whose name is __init__ plus extension. But this seems very brittle to me.
from the module object:
module.__package__
is available for a couple of modules that I looked at, and appears to be correct. It may not give you exactly what you want, though:
import os.path
import httplib2
import xml.etree.ElementTree as etree
os.path.__package__
<blank... this is a builtin>
httplib2.__package__
'httplib2'
etree.__package__
'xml.etree'
you can use
function.__module__
as well, but that gives you the module name, not the actual module object; I'm not sure there is an easier way to get the module object itself than importing it again into a local variable.
etree.parse.__module__
'xml.etree.ElementTree'
os.path.split.__module__
'ntpath'
The good thing is this appears to be more correct, following the actual location of things, e.g.:
httplib2.Response.__module__
'httplib2'
httplib2.copy.copy.__module__
'copy'
httplib2.urlparse.ParseResult.__module__
'urlparse'
etc.
This is what importlib.__import__() does; it needs to re-implement most of Python's built-in import logic, and it needs to find a module's package to support relative imports:
# __package__ is not guaranteed to be defined or could be set to None
# to represent that its proper value is unknown
package = globals.get('__package__')
if package is None:
    package = globals['__name__']
    if '__path__' not in globals:
        package = package.rpartition('.')[0]
module = _gcd_import(name, package, level)
So it seems that there is no reliable "direct" way to get a module's package.
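That said, the __path__ check that snippet relies on is itself a dependable discriminator: package modules have a __path__ attribute and plain modules do not. A small helper sketch (the function names here are my own):

```python
import importlib

def is_package(module):
    # Only packages carry a __path__ attribute.
    return hasattr(module, "__path__")

def package_name(module_name):
    # Return the package the named module belongs to
    # (the module itself if it is a package, '' for a top-level module).
    module = importlib.import_module(module_name)
    if is_package(module):
        return module.__name__
    return module.__name__.rpartition(".")[0]

print(package_name("json"))          # 'json'  (json is itself a package)
print(package_name("json.decoder"))  # 'json'
print(package_name("math"))          # ''      (top-level plain module)
```

This avoids poking at __file__ and works for packages without a conventional __init__ file as well, since the import system sets __path__ on anything it treats as a package.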