I'm writing a piece of software over on GitHub. It's basically a tray icon with some extra features. I want to provide a working piece of code without making the user install what are essentially dependencies for optional features, and I don't actually want to import things I'm not going to use, so I thought code like this would be a "good solution":
---- IN LOADING FUNCTION ----
features = []
for path in sys.path:
    if os.path.exists(os.path.join(path, 'pynotify')):
        features.append('pynotify')
    if os.path.exists(os.path.join(path, 'gnomekeyring.so')):
        features.append('gnome-keyring')

# user dialog to ask for stuff
# notifications available, do you want them enabled?
dlg = ConfigDialog(features)

if not dlg.get_notifications():
    features.remove('pynotify')

service_start(features ...)
---- SOMEWHERE ELSE ------
def service_start(features, other_config):
    if 'pynotify' in features:
        import pynotify
        # use pynotify...
There are some issues, however. If a user formats his machine, installs the newest version of his OS, and redeploys this application, features suddenly disappear without warning. The solution is to present this on the configuration window:
if 'pynotify' in features:
    # gtk checkbox
else:
    # gtk label reading "Get pynotify and enjoy notification pop ups!"
But if this is, say, a Mac, how do I know I'm not sending the user on a wild goose chase looking for a dependency they can never satisfy?
The second problem is the:
if os.path.exists(os.path.join(path, 'gnomekeyring.so')):
issue. Can I be sure that the file is always called gnomekeyring.so across all the Linux distros?
How do other people test these features? The problem with the basic
try:
    import pynotify
except:
    pynotify = disabled
is that the code is global: these blocks might be littered all over, and even if the user doesn't want pynotify... it's loaded anyway.
So what do people think is the best way to solve this problem?
The try: method does not need to be global — it can be used in any scope and so modules can be "lazy-loaded" at runtime. For example:
def foo():
    try:
        import external_module
    except ImportError:
        external_module = None

    if external_module:
        external_module.some_whizzy_feature()
    else:
        print("You could be using a whizzy feature right now, if you had external_module.")
When your script is run, no attempt will be made to load external_module. The first time foo() is called, external_module is (if available) loaded and inserted into the function's local scope. Subsequent calls to foo() reinsert external_module into its scope without needing to reload the module.
In general, it's best to let Python handle import logic — it's been doing it for a while. :-)
You might want to have a look at the imp module (since deprecated in favour of importlib), which basically does what you do manually above. So you can first look for a module with find_module() and then load it via load_module() or by simply importing it (after checking the config).
And by the way, when using except: I would always add the specific exception (here ImportError) so as not to accidentally catch unrelated errors.
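For reference, a minimal sketch of that check-then-load flow with the modern importlib API, reusing the question's 'pynotify' example (the features list is just illustrative):

import importlib
import importlib.util

features = []

# Probe for the module without importing it (no side effects yet).
if importlib.util.find_spec('pynotify') is not None:
    features.append('pynotify')

# Later, once the user has confirmed the feature in the config dialog:
if 'pynotify' in features:
    pynotify = importlib.import_module('pynotify')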
Not sure if this is good practice, but I created a function that does the optional import (using importlib) and error handling:
def _optional_import(module: str, name: str = None, package: str = None):
    import importlib
    try:
        module = importlib.import_module(module)
        return module if name is None else getattr(module, name)
    except ImportError as e:
        if package is None:
            package = module
        msg = f"install the '{package}' package to make use of this feature"
        raise ValueError(msg) from e
If an optional module is not available, the user will at least get an idea of what to do. E.g.:
# code ...
if file.endswith('.json'):
    from json import load
elif file.endswith('.yaml'):
    # equivalent to 'from yaml import safe_load as load'
    load = _optional_import('yaml', 'safe_load', package='pyyaml')
# code using load ...
The main disadvantage with this approach is that your imports have to be done in-line and are not all at the top of your file. Therefore, it might be considered better practice to use a slight adaptation of this function (assuming that you are importing a function or the like):
def _optional_import_(module: str, name: str = None, package: str = None):
    import importlib
    try:
        module = importlib.import_module(module)
        return module if name is None else getattr(module, name)
    except ImportError as e:
        if package is None:
            package = module
        msg = f"install the '{package}' package to make use of this feature"
        import_error = e

        def _failed_import(*args):
            raise ValueError(msg) from import_error

        return _failed_import
Now, you can place these imports alongside the rest of your imports, and the error will only be raised when the function that failed to import is actually used. E.g.:
from utils import _optional_import_  # let's assume we import the function

from json import load as json_load
yaml_load = _optional_import_('yaml', 'safe_load', package='pyyaml')

# unimportant code ...

with open('test.txt', 'r') as fp:
    result = yaml_load(fp)  # will raise a ValueError if the import was not successful
PS: sorry for the late answer!
Another option is to use @contextmanager and with. In this situation, you do not know beforehand which dependencies are needed:
from contextlib import contextmanager

@contextmanager
def optional_dependencies(error: str = "ignore"):
    assert error in {"raise", "warn", "ignore"}
    try:
        yield None
    except ImportError as e:
        if error == "raise":
            raise e
        if error == "warn":
            msg = f'Missing optional dependency "{e.name}". Use pip or conda to install.'
            print(f'Warning: {msg}')
Usage:
with optional_dependencies("warn"):
import module_which_does_not_exist_1
import module_which_does_not_exist_2
z = 1
print(z)
Output:
Warning: Missing optional dependency "module_which_does_not_exist_1". Use pip or conda to install.
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In [43], line 5
3 import module_which_does_not_exist_2
4 z = 1
----> 5 print(z)
NameError: name 'z' is not defined
Here, you should define all your imports immediately after with. The first module which is not installed will throw ImportError, which is caught by optional_dependencies. Depending on how you want to handle this error, it will either ignore it, print a warning, or raise it again.
The entire code will only run if all the modules are installed.
Here's a production-grade solution, using importlib and Pandas's import_optional_dependency, as suggested by @dre-hh.
from typing import *
import importlib, types

def module_exists(
        *names: Union[List[str], str],
        error: str = "ignore",
        warn_every_time: bool = False,
        __INSTALLED_OPTIONAL_MODULES: Dict[str, bool] = {}
) -> Optional[Union[Tuple[types.ModuleType, ...], types.ModuleType]]:
    """
    Try to import optional dependencies.
    Ref: https://stackoverflow.com/a/73838546/4900327

    Parameters
    ----------
    names: str or list of strings.
        The module name(s) to import.
    error: str {'raise', 'warn', 'ignore'}
        What to do when a dependency is not found.
        * raise : Raise an ImportError.
        * warn: print a warning.
        * ignore: If any module is not installed, return None, otherwise,
          return the module(s).
    warn_every_time: bool
        Whether to warn every time an import is tried. Only applies when error="warn".
        Setting this to True will result in multiple warnings if you try to
        import the same library multiple times.

    Returns
    -------
    maybe_module : Optional[ModuleType, Tuple[ModuleType...]]
        The imported module(s), if all are found.
        None is returned if any module is not found and `error!="raise"`.
    """
    assert error in {"raise", "warn", "ignore"}
    if isinstance(names, (list, tuple, set)):
        names: List[str] = list(names)
    else:
        assert isinstance(names, str)
        names: List[str] = [names]
    modules = []
    for name in names:
        try:
            module = importlib.import_module(name)
            modules.append(module)
            __INSTALLED_OPTIONAL_MODULES[name] = True
        except ImportError:
            modules.append(None)

    def error_msg(missing: Union[str, List[str]]):
        if not isinstance(missing, (list, tuple)):
            missing = [missing]
        missing_str: str = ' '.join([f'"{name}"' for name in missing])
        dep_str = 'dependencies'
        if len(missing) == 1:
            dep_str = 'dependency'
        msg = f'Missing optional {dep_str} {missing_str}. Use pip or conda to install.'
        return msg

    missing_modules: List[str] = [name for name, module in zip(names, modules) if module is None]
    if len(missing_modules) > 0:
        if error == "raise":
            raise ImportError(error_msg(missing_modules))
        if error == "warn":
            for name in missing_modules:
                ## Ensures warning is printed only once
                if warn_every_time is True or name not in __INSTALLED_OPTIONAL_MODULES:
                    print(f'Warning: {error_msg(name)}')
                    __INSTALLED_OPTIONAL_MODULES[name] = False
        return None
    if len(modules) == 1:
        return modules[0]
    return tuple(modules)
Usage: ignore errors (error="ignore", default behavior)
Suppose we want to run certain code only if the required libraries exist:
if module_exists("pydantic", "sklearn"):
from pydantic import BaseModel
from sklearn.metrics import accuracy_score
class AccuracyCalculator(BaseModel):
num_decimals: int = 5
def calculate(self, y_pred: List, y_true: List) -> float:
return round(accuracy_score(y_true, y_pred), self.num_decimals)
print("Defined AccuracyCalculator in global context")
If either dependency (pydantic or sklearn) does not exist, then the class AccuracyCalculator will not be defined and the print statement will not run.
Usage: raise ImportError (error="raise")
Alternatively, you can raise an error if any module does not exist:
if module_exists("pydantic", "sklearn", error="raise"):
from pydantic import BaseModel
from sklearn.metrics import accuracy_score
class AccuracyCalculator(BaseModel):
num_decimals: int = 5
def calculate(self, y_pred: List, y_true: List) -> float:
return round(accuracy_score(y_true, y_pred), self.num_decimals)
print("Defined AccuracyCalculator in global context")
Output:
line 60, in module_exists(error, __INSTALLED_OPTIONAL_MODULES, *names)
58 if len(missing_modules) > 0:
59 if error == "raise":
---> 60 raise ImportError(error_msg(missing_modules))
61 if error == "warn":
62 for name in missing_modules:
ImportError: Missing optional dependencies "pydantic" "sklearn". Use pip or conda to install.
Usage: print a warning (error="warn")
Alternatively, you can print a warning if the module does not exist.
if module_exists("pydantic", "sklearn", error="warn"):
from pydantic import BaseModel
from sklearn.metrics import accuracy_score
class AccuracyCalculator(BaseModel):
num_decimals: int = 5
def calculate(self, y_pred: List, y_true: List) -> float:
return round(accuracy_score(y_true, y_pred), self.num_decimals)
print("Defined AccuracyCalculator in global context")
if module_exists("pydantic", "sklearn", error="warn"):
from pydantic import BaseModel
from sklearn.metrics import roc_auc_score
class RocAucCalculator(BaseModel):
num_decimals: int = 5
def calculate(self, y_pred: List, y_true: List) -> float:
return round(roc_auc_score(y_true, y_pred), self.num_decimals)
print("Defined RocAucCalculator in global context")
Output:
Warning: Missing optional dependency "pydantic". Use pip or conda to install.
Warning: Missing optional dependency "sklearn". Use pip or conda to install.
Here, we ensure that only one warning is printed for each missing module, otherwise you would get a warning each time you try to import.
This is very useful for Python libraries where you might try to import the same optional dependencies many times, and only want to see one Warning.
You can pass warn_every_time=True to always print the warning when you try to import.
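For instance, reusing module_exists from above (and assuming pydantic and sklearn are missing, as in the output shown):

module_exists("pydantic", "sklearn", error="warn")  # prints the two warnings
module_exists("pydantic", "sklearn", error="warn")  # silent: each module already warned once
module_exists("pydantic", "sklearn", error="warn", warn_every_time=True)  # warns again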
I'm really excited to share this new technique I came up with to handle optional dependencies!
The concept is to produce the error when the uninstalled package is used, not when it is imported.
Just add a single call before your imports. You don't need to change any code at all. No more using try: when importing. No more using conditional skip decorators when writing tests.
Main components
An importer to return a fake module for missing imports
A fake module that raises an exception when it's used
A custom Exception that will skip tests automatically if raised within one
Minimal Code Example
import sys
import importlib
import importlib.util
from unittest.case import SkipTest
from _pytest.outcomes import Skipped

class MissingOptionalDependency(SkipTest, Skipped):
    def __init__(self, msg=None):
        self.msg = msg

    def __repr__(self):
        return f"MissingOptionalDependency: {self.msg}" if self.msg else "MissingOptionalDependency"

class GeneralImporter:
    def __init__(self, *names):
        self.names = names
        sys.meta_path.insert(0, self)

    def find_spec(self, fullname, path=None, target=None):
        if fullname in self.names:
            return importlib.util.spec_from_loader(fullname, self)

    def create_module(self, spec):
        return FakeModule(name=spec.name)

    def exec_module(self, module):
        pass

class FakeModule:
    def __init__(self, name):
        self.name = name

    def __call__(self, *args, **kwargs):
        raise MissingOptionalDependency(f"Optional dependency '{self.name}' was used but it isn't installed.")

GeneralImporter("notinstalled")

import notinstalled  # No error

print(notinstalled)  # <__main__.FakeModule object at 0x0000014B7F6D9E80>

notinstalled()  # MissingOptionalDependency: Optional dependency 'notinstalled' was used but it isn't installed.
Package
The technique above has some shortcomings that my package fixes.
It's open-source, lightweight, and has no dependencies!
Some key differences to the example above:
Covers more than 100 dunder methods (All tested)
Covers 15 common dunder attribute lookups
Entry function is generalimport which returns an ImportCatcher
ImportCatcher holds names, scope, and caught names
It can be enabled and disabled
The scope prevents external packages from being affected
Wildcard support to allow any package to be imported
Puts the importer first in sys.meta_path
Lets it catch namespace imports (Usually occurs with uninstalled packages)
Generalimport on GitHub
pip install generalimport
Minimal example
from generalimport import generalimport
generalimport("notinstalled")
from notinstalled import missing_func # No error
missing_func() # Error occurs here
The readme on GitHub goes more in-depth
One way to handle the problem of different dependencies for different features is to implement the optional features as plugins. That way the user has control over which features are activated in the app but isn't responsible for tracking down the dependencies herself. That task then gets handled at the time of each plugin's installation.
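One minimal sketch of that idea (the plugins package layout and the discovery helper here are hypothetical, not from the question's project):

import importlib
import pkgutil

import plugins  # hypothetical package with one module per optional feature

def load_plugins():
    # Import every module found inside the plugins package. Each plugin
    # imports its own dependencies, so a missing dependency disables only
    # that plugin instead of the whole application.
    loaded = {}
    for info in pkgutil.iter_modules(plugins.__path__):
        try:
            loaded[info.name] = importlib.import_module(f'plugins.{info.name}')
        except ImportError as e:
            print(f"Plugin '{info.name}' disabled (missing dependency: {e.name})")
    return loaded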
Related
So I have been trying to get into visualizing proteins in Python. I went on YouTube and found some tutorials, and ended up on one teaching how to visualize a protein from the COVID-19 virus. I set up Anaconda, got Jupyter Notebook working in VS Code, downloaded the necessary files from the PDB database, and made sure they were in the same directory as my notebook. But when I run the nglview.show_biopython(structure) function I get a ValueError: I/O operation on a closed file. I'm stumped. This is my first time using Jupyter Notebook, so maybe there is something I'm missing, I don't know.
This is what the code looks like:
from Bio.PDB import *
import nglview as nv
parser = PDBParser()
structure = parser.get_structure("6YYT", "6YYT.pdb")
view = nv.show_biopython(structure)
This is the error
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_1728\2743687014.py in <module>
----> 1 view = nv.show_biopython(structure)
c:\Users\jerem\anaconda3\lib\site-packages\nglview\show.py in show_biopython(entity, **kwargs)
450 '''
451 entity = BiopythonStructure(entity)
--> 452 return NGLWidget(entity, **kwargs)
453
454
c:\Users\jerem\anaconda3\lib\site-packages\nglview\widget.py in __init__(self, structure, representations, parameters, **kwargs)
243 else:
244 if structure is not None:
--> 245 self.add_structure(structure, **kwargs)
246
247 if representations:
c:\Users\jerem\anaconda3\lib\site-packages\nglview\widget.py in add_structure(self, structure, **kwargs)
1111 if not isinstance(structure, Structure):
1112 raise ValueError(f'{structure} is not an instance of Structure')
-> 1113 self._load_data(structure, **kwargs)
1114 self._ngl_component_ids.append(structure.id)
1115 if self.n_components > 1:
...
--> 200 return io_str.getvalue()
201
202
ValueError: I/O operation on closed file
I only get this error when using nglview.show_biopython; when I run the get_structure() function it can read the file just fine. I can visualize other molecules just fine, but maybe that's because I was using the ASE library instead of a file. I don't know; that's why I'm here.
Update: Recently I found out that I can visualize the protein using nglview.show_file() instead of nglview.show_biopython(). Even though I can visualize proteins now, and technically my problem has been solved, I would still like to know why the show_biopython() function isn't working properly.
I also figured out another way to fix this problem. Going back to the tutorial I mentioned, I saw that it was made back in 2021, and I wondered whether we were using the same versions of each package; it turns out we were not. I'm not sure what version of nglview they were using, but they were using Biopython 1.79, which was the latest version back in 2021, while I was using Biopython 1.80. With Biopython 1.80 I was getting the error seen above, but now that I'm using Biopython 1.79 I get this:
file = "6YYT.pdb"
parser = PDBParser()
structure = parser.get_structure("6YYT", file)
structure
view = nv.show_biopython(structure)
view
Output:
c:\Users\jerem\anaconda3\lib\site-packages\Bio\PDB\StructureBuilder.py:89:
PDBConstructionWarning: WARNING: Chain A is discontinuous at line 12059.
warnings.warn(
So I guess there is something going on with Biopython 1.80, so I'm going to stick with 1.79.
I had a similar problem with:
from Bio.PDB import *
import nglview as nv
parser = PDBParser(QUIET = True)
structure = parser.get_structure("2ms2", "2ms2.pdb")
save_pdb = PDBIO()
save_pdb.set_structure(structure)
save_pdb.save('pdb_out.pdb')
view = nv.show_biopython(structure)
view
The error was like the one in the question:
.................site-packages/nglview/adaptor.py:201, in BiopythonStructure.get_structure_string(self)
199 io_str = StringIO()
200 io_pdb.save(io_str)
--> 201 return io_str.getvalue()
ValueError: I/O operation on closed file
I modified site-packages/nglview/adaptor.py:201, in BiopythonStructure.get_structure_string(self):
def get_structure_string(self):
    from Bio.PDB import PDBIO
    from io import StringIO
    io_pdb = PDBIO()
    io_pdb.set_structure(self._entity)
    io_str = StringIO()
    io_pdb.save(io_str)
    return io_str.getvalue()
with:
def get_structure_string(self):
    from Bio.PDB import PDBIO
    import mmap
    io_pdb = PDBIO()
    io_pdb.set_structure(self._entity)
    mo = mmap_str()
    io_pdb.save(mo)
    return mo.read()
and added this new class mmap_str(), in the same file:
import mmap
import copy

class mmap_str():
    import mmap  # added import at top
    instance = None

    def __init__(self):
        self.mm = mmap.mmap(-1, 2)
        self.a = ''
        b = '\n'
        self.mm.write(b.encode(encoding='utf-8'))
        self.mm.seek(0)
        # print('self.mm.read().decode() ', self.mm.read().decode(encoding='utf-8'))
        self.mm.seek(0)

    def __new__(cls, *args, **kwargs):
        if not isinstance(cls.instance, cls):
            cls.instance = object.__new__(cls)
        return cls.instance

    def write(self, string):
        self.a = str(copy.deepcopy(self.mm.read().decode(encoding='utf-8'))).lstrip('\n')
        self.mm.seek(0)
        # print('a -> ', self.a)
        len_a = len(self.a)
        self.mm = mmap.mmap(-1, len(self.a) + len(string))
        # print('a :', self.a)
        # print('len self.mm ', len(self.mm))
        # print('length string : ', len(string))
        # print(bytes((self.a + string).encode()))
        self.mm.write(bytes((self.a + string).encode()))
        self.mm.seek(0)
        # print('written once ')
        # self.mm.seek(0)

    def read(self):
        self.mm.seek(0)
        a = self.mm.read().decode().lstrip('\n')
        self.mm.seek(0)
        return a

    def __enter__(self):
        return self

    def __exit__(self, *args):
        pass
If I uncomment the print statements I'll get the:

IOPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.

error, but commenting them out the structure renders. [screenshot of the result in the original answer]
While using nglview.show_file(filename) I get: [screenshot in the original answer]
That's because, as can be seen by looking at the pdb_out.pdb file outputted by my code, Bio.PDB.PDBParser.get_structure(name, filename) doesn't retrieve the PDB header responsible for generating the full CRYSTALLOGRAPHIC SYMMETRY (or Biopython can't handle it; not sure about this, help if you know better), but just the coordinates.
Still don't understand what is going on with the:
--> 201 return io_str.getvalue()
ValueError: I/O operation on closed file
Could it be something related to the Jupyter ipykernel? I hope somebody can shed more light on this. I don't know how the framework runs, but it is definitely different from a normal Python interpreter. As an example: the same code in one of my Python virtualenvs will run forever, so could it be that ipykernel doesn't like StringIO()s, or does something strange to them?
OK, thanks to the hint in the answer below, I went inspecting PDBIO.py in the GitHub repo for Biopython 1.80 and compared the save method of PDBIO (def save(self, file, select=_select, write_end=True, preserve_atom_numbering=False):) with the one in Biopython 1.79.
See the first bit: [screenshot in the original answer]
and the last bit: [screenshot in the original answer]
So apparently the big difference is the with fhandle: block in version 1.80.
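To see why that block matters, here is a self-contained sketch of the mechanism (these two functions are illustrative stand-ins, not the actual Biopython code):

from io import StringIO

def save_179_style(fhandle):
    # 1.79-style behaviour: write and leave the caller's handle open
    fhandle.write("ATOM ...\n")

def save_180_style(fhandle):
    # 1.80-style behaviour: the with block closes the handle on exit
    with fhandle:
        fhandle.write("ATOM ...\n")

buf = StringIO()
save_179_style(buf)
print(buf.getvalue())   # fine, the buffer is still open

buf = StringIO()
save_180_style(buf)
print(buf.getvalue())   # ValueError: I/O operation on closed file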
So I realized that changing adaptor.py by adding a subclass of StringIO that looks like:
from io import StringIO

class StringIO(StringIO):
    def __exit__(self, *args, **kwargs):
        print('exiting from subclassed StringIO !!!!!')
        pass
and modifying def get_structure_string(self): like this:
def get_structure_string(self):
    from Bio.PDB import PDBIO
    # from io import StringIO
    io_pdb = PDBIO()
    io_pdb.set_structure(self._entity)
    io_str = StringIO()
    io_pdb.save(io_str)
    return io_str.getvalue()
was enough to get my Biopython 1.80 working in Jupyter with nglview.
That said, I am not sure what the pitfalls are of not closing the StringIO object we use for the visualization, but apparently that is what Biopython 1.79 was doing, just like my first answer using an mmap object was doing (not closing the mmap_str).
Another way to solve the problem:
While trying to understand git, I ended up with this. It seems more coherent with the previous habits in the Biopython project, but I can't push it.
It makes use of as_handle from Bio.File: https://github.com/biopython/biopython/blob/e1902d1cdd3aa9325b4622b25d82fbf54633e251/Bio/File.py#L28
import contextlib

@contextlib.contextmanager
def as_handle(handleish, mode="r", **kwargs):
    r"""Context manager to ensure we are using a handle.

    Context manager for arguments that can be passed to SeqIO and AlignIO read, write,
    and parse methods: either file objects or path-like objects (strings, pathlib.Path
    instances, or more generally, anything that can be handled by the builtin 'open'
    function).

    When given a path-like object, returns an open file handle to that path, with provided
    mode, which will be closed when the manager exits.

    All other inputs are returned, and are *not* closed.

    Arguments:
     - handleish - Either a file handle or path-like object (anything which can be
       passed to the builtin 'open' function, such as str, bytes,
       pathlib.Path, and os.DirEntry objects)
     - mode - Mode to open handleish (used only if handleish is a string)
     - kwargs - Further arguments to pass to open(...)

    Examples
    --------
    >>> from Bio import File
    >>> import os
    >>> with File.as_handle('seqs.fasta', 'w') as fp:
    ...     fp.write('>test\nACGT')
    ...
    10
    >>> fp.closed
    True

    >>> handle = open('seqs.fasta', 'w')
    >>> with File.as_handle(handle) as fp:
    ...     fp.write('>test\nACGT')
    ...
    10
    >>> fp.closed
    False
    >>> fp.close()
    >>> os.remove("seqs.fasta")  # tidy up
    """
    try:
        with open(handleish, mode, **kwargs) as fp:
            yield fp
    except TypeError:
        yield handleish
Could anyone pass it along? [Of course it needs to be checked out; my tests are OK, but I am a novice.]
As my MRE, I've got the following file:
blah.py
'''Blah module'''
import pydantic

class Foo:
    '''Foo class'''

class Bar(pydantic.BaseModel):
    '''Bar class'''
    x: str = pydantic.Field(description='The x.')

    @pydantic.validator('x')
    def do_nothing(cls, value: str) -> str:
        return value
I'm attempting to use Sphinx to generate documentation for this module. In my conf.py, I have
extensions = [
    'sphinx.ext.autodoc',
    'sphinxcontrib.autodoc_pydantic',
]
My blah.rst is
Blah
====

.. automodule:: blah.blah
   :members:
I've pip installed pydantic and autodoc_pydantic.
However, when I run make html, I get
Exception occurred:
File "/home/user/Projects/Workspace/env/lib/python3.10/site-packages/sphinxcontrib/autodoc_pydantic/inspection.py", line 311, in __init__
self.attribute: Dict = self.model.Config
AttributeError: type object 'Foo' has no attribute 'Config'
It appears that autodoc_pydantic thinks that Foo inherits from pydantic.BaseModel when it's really Bar that does. If I remove 'sphinxcontrib.autodoc_pydantic' from extensions, the error goes away.
More interestingly, if I delete the validator, the error goes away as well.
autodoc_pydantic is version 1.6.1.
This issue was fixed in version 1.7.2.
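Since the fix shipped in the extension itself, upgrading should be enough (the same package name you already pip installed):

pip install --upgrade autodoc_pydantic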
How can I deserialize a protocol buffer message, knowing only the string name of the protoc-generated class?
For some reason, the fully qualified name of the message that I get with DESCRIPTOR.full_name does not match the actual location of the python class, so I am unable to deserialize it with the following function:
def get_class(kls):
    """Get class given a fully qualified name of a class"""
    parts = kls.split('.')
    module = ".".join(parts[:-1])
    m = __import__(module)
    for comp in parts[1:]:
        m = getattr(m, comp)
    return m
I just get ImportError: No module named (name).
Any help appreciated.
P.S.: In case it helps, the bigger problem I am trying to solve is serializing protobuf messages to the database and then deserializing them back out (I am using PostgreSQL with SQLAlchemy in this case). Since regular Python pickle doesn't work, I am hoping to store a tuple (message_cls_name, message_binary), where message_cls_name is the fully qualified name of the protobuf message and message_binary is the result of calling SerializeToString on the message. The part I am not clear about is how to get the message out and deserialize it into the proper protobuf class.
Here is an example solution:
from ocmg_common.protobuf.rpc_pb2 import Exception as PBException
from importlib import import_module

pb_message = PBException(message='hello world')
pb_message_string = pb_message.SerializeToString()

def deserialize(message, typ):
    module_, class_ = typ.rsplit('.', 1)
    class_ = getattr(import_module(module_), class_)
    rv = class_()
    rv.ParseFromString(message)
    return rv

print(deserialize(pb_message_string, 'ocmg_common.protobuf.rpc_pb2.Exception'))
will output
(env)➜ httppost git:(master) ✗ python test.py
message: "hello world"
If you don't know the module, but only have the DESCRIPTOR.full_name, you have to name them the same way, or create a function that maps a full descriptor name to a module. Otherwise you are stuck and don't know where to import the module from.
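One alternative that avoids the name mapping entirely (an assumption worth verifying against your protobuf version): generated _pb2 modules register their messages with protobuf's default symbol database when imported, so if the generated module has been imported somewhere in the process you can look the class up by its descriptor full name:

from google.protobuf import symbol_database

import ocmg_common.protobuf.rpc_pb2 as rpc_pb2  # importing registers its messages

sym_db = symbol_database.Default()
full_name = rpc_pb2.Exception.DESCRIPTOR.full_name  # the proto name, not the Python module path
cls = sym_db.GetSymbol(full_name)

msg = cls()
msg.ParseFromString(pb_message_string)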
I hope it will help you.. ;)
Best of luck..
My script has the following line:
libro_dia = xlrd.open_workbook(file_contents = libro_dia)
When libro_dia is not valid, it raises the following error:
XLRDError: Unsupported format, or corrupt file: Expected BOF record; found '<!DOCTYP'
I want to handle this error, so I write:
try:
    libro_dia = xlrd.open_workbook(file_contents=libro_dia)
except XLRDError:
    no_termina = False
But it raises the following error:
NameError: name 'XLRDError' is not defined
What's going on?
You don't have XLRDError imported. I'm not familiar with xlrd, but something like:
from xlrd import XLRDError
might work. Alternatively, qualify your Error when handling it:
try:
    libro_dia = xlrd.open_workbook(file_contents=libro_dia)
except xlrd.XLRDError:  # <-- Qualified error here
    no_termina = False
The above is assuming you have the following import:
import xlrd
In response to your comment:
There are several ways to use imports in Python. If you import by using import xlrd, then you will have to qualify every object in that module as xlrd.SomeObject. An alternative is the form from xlrd import *, which lets you reference XLRDError without its module namespace; this is lazy and a bad idea though, as it can lead to namespace clashes. If you would like to reference the error without qualifying it, the correct way is from xlrd import XLRDError, which allows you to write except XLRDError. Read more about Python Modules.
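Concretely, the two recommended forms look like this:

import xlrd                   # qualify the name at the handler: except xlrd.XLRDError
from xlrd import XLRDError    # or bring the name in directly: except XLRDError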
XLRDError is a custom exception and must be imported to your namespace just like any other object.
Edit: As Burhan Khalid has noted, you could just modify the except block to except xlrd.XLRDError if you already have import xlrd in your module.
Try using xlrd.XLRDError as e; e.message should contain the error string (on Python 3, where exceptions have no .message attribute, use str(e)).
Sample:
try:
    workbook = xlrd.open_workbook(sys.argv[1])
except xlrd.XLRDError as e:
    print(e)  # on Python 2 this was: print e.message
    sys.exit(-1)
Calling a function of a module from a string with the function's name in Python shows us how to call a function by using getattr(foo, "bar")(), but this assumes that we have the module foo imported already.
How would we then go about calling "foo.bar" if we probably also have to perform the import of foo (or from foo import bar)?
Use the __import__(...) function:
http://docs.python.org/library/functions.html#import
(David almost had it, but I think his example is more appropriate for what to do if you want to redefine the normal import process, e.g. to load from a zip file.)
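For example, a minimal sketch (using importlib.import_module, the friendlier modern wrapper; note that a bare __import__('foo.bar') returns the top-level foo package rather than the bar submodule):

import importlib

def call_by_name(dotted, *args, **kwargs):
    # 'foo.bar' -> module 'foo', attribute 'bar'
    module_name, func_name = dotted.rsplit('.', 1)
    module = importlib.import_module(module_name)
    return getattr(module, func_name)(*args, **kwargs)

print(call_by_name('os.path.join', 'a', 'b'))  # prints: a/b (a\b on Windows)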
You can use find_module and load_module from the imp module (since deprecated in favour of importlib) to load a module whose name and/or location is determined at execution time.
The example at the end of the documentation topic explains how:
import imp
import sys

def __import__(name, globals=None, locals=None, fromlist=None):
    # Fast path: see if the module has already been imported.
    try:
        return sys.modules[name]
    except KeyError:
        pass

    # If any of the following calls raises an exception,
    # there's a problem we can't handle -- let the caller handle it.
    fp, pathname, description = imp.find_module(name)

    try:
        return imp.load_module(name, fp, pathname, description)
    finally:
        # Since we may exit via an exception, close fp explicitly.
        if fp:
            fp.close()
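On current Python versions, where imp is deprecated, the same check-then-load dance can be written with importlib; this is a sketch of the equivalent, not a drop-in replacement:

import importlib.util
import sys

def load_by_name(name):
    # Fast path: see if the module has already been imported.
    if name in sys.modules:
        return sys.modules[name]
    spec = importlib.util.find_spec(name)  # locate without executing the module
    if spec is None:
        raise ImportError(f"No module named {name!r}")
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module             # register before executing
    spec.loader.exec_module(module)
    return module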
Here is what I finally came up with to get the function I wanted back out from a dotted name:
def dotsplit(dottedname):
    module = '.'.join(dottedname.split('.')[:-1])
    function = dottedname.split('.')[-1]
    return module, function

def load(dottedname):
    mod, func = dotsplit(dottedname)
    try:
        # a non-empty fromlist makes __import__ return the submodule itself
        mod = __import__(mod, globals(), locals(), [func])
        return getattr(mod, func)
    except (ImportError, AttributeError):
        return dottedname
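Usage, with a standard-library function as the dotted name:

join = load('os.path.join')
print(join('a', 'b'))          # a/b (a\b on Windows)
print(load('no.such.thing'))   # import fails, so the dotted name itself is returned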
Doesn't this solution work?
Calling a function of a module from a string with the function's name in Python