How do I define custom exceptions across multiple files? - python

So I have a Python project that is split across multiple files (to preserve my sanity) and I'm trying to use a custom exception defined in one file in a different file. Basically, I have a.py and b.py. a.py contains a custom exception (customException) that is raised by a function within a.py.
a.py:
class customException(Exception):
    [irrelevant error handling]

def aFunction():
    if [something]:
        raise customException
    return [stuff]
The surrounding code is irrelevant (I hope, otherwise I have a much weirder issue) so I didn't include it. I then import a.py into b.py (they're in the same directory) to use aFunction:
b.py:
import a

try:
    var = a.aFunction()
except customException:
    var = [something else]
When I run b.py in a situation where a.py would raise customException, I get the expected a.customException error, but I also get a NameError: name 'customException' is not defined error.
How do I make it so that customException is defined in b.py?
Edit: Paul M.'s solution worked for me just in case anyone else finds this when looking for a solution. Thanks again Paul and everyone else who answered!

You just need to qualify customException by calling it as a.customException from modules where you imported a, like so:
a.py:
class customException(Exception):
    [irrelevant error handling]

def aFunction():
    if [something]:
        raise customException
    return [stuff]
b.py:
import a

try:
    var = a.aFunction()
except a.customException:
    var = [something else]
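A self-contained way to check this behaviour: the snippet below builds an in-memory stand-in for a.py (the dynamic module construction is only there so the example runs on its own; in the real project you would simply have the two files). It also shows that `from a import customException` would work as an alternative, since it names the very same class object:

```python
import sys
import types

# Hypothetical in-memory stand-in for a.py, so the snippet is self-contained.
a = types.ModuleType("a")
exec(
    "class customException(Exception):\n"
    "    pass\n"
    "\n"
    "def aFunction():\n"
    "    raise customException\n",
    a.__dict__,
)
sys.modules["a"] = a

# The qualified name from the answer catches the exception:
try:
    a.aFunction()
except a.customException:
    print("caught via a.customException")

# Importing the name directly also works; it is the same class object:
from a import customException
print(customException is a.customException)  # True
```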

Related

Unusual import of a class in Python

There is a file exceptions.py in the kubernetes.client folder where the ApiException class is defined. So I can write the following in my own file, say myfile.py, and use ApiException for exception handling.
some_folder.myfile.py code snippet:
from kubernetes.client.exceptions import ApiException
.....
.....
try:
    .....
except ApiException as e:
    .....
That is fine.
Also, rest.py, present in the kubernetes.client folder, imports the same ApiException class and raises an exception with it.
kubernetes.client.rest.py code snippet:
from kubernetes.client.exceptions import ApiException
.....
.....
if not 200 <= r.status <= 299:
    raise ApiException(http_resp=r)
That is also fine. But I am confused by the following: ApiException is imported from kubernetes.client.rest in some_file.py (see below), not from kubernetes.client.exceptions, where the actual class definition of ApiException lives.
some_folder.some_file.py code snippet:
from kubernetes.client.rest import ApiException
.....
.....
try:
    .....
except ApiException as e:
    .....
The above code works, but I am really surprised. Can somebody explain to me what is happening here? Sorry, I am new to Python.
Note:
ApiException class is not defined in kubernetes.client.rest, it is only defined in kubernetes.client.exceptions
I have searched many articles online but did not find much information.
The name ApiException is also defined in kubernetes.client.rest, because it's been imported there. kubernetes.client.rest is using it, so it exists there. Any name that exists at the top level of a module is an attribute of that module and can be imported from elsewhere. It doesn't matter how that name got to be defined there.
Arguably the class should be imported from its canonical location where it has been defined, but it doesn't have to be. some_folder.some_file.py might not know where the exception has been originally defined, if it only interacts with kubernetes.client.rest and just needs to catch exceptions raised there.
You will often see this technique used in __init__.py files to simply re-export some classes defined in submodules under a simpler name, e.g.:
# foo/__init__.py
from .submodule import Foo
from .othermodule import Bar
This allows users of foo to from foo import Foo, instead of having to do from foo.submodule import Foo, but it still keeps the implementation of foo clean and separated into multiple files.
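The re-export mechanism can be reproduced without any files: the snippet below builds the "submodule" and the "package" as in-memory modules (the names submodule_demo and foo_demo are illustrative stand-ins), and shows that a name merely imported into a module becomes importable from it:

```python
import sys
import types

# Hypothetical in-memory stand-ins for foo/submodule.py and foo/__init__.py.
submodule = types.ModuleType("submodule_demo")
exec("class Foo(object):\n    pass\n", submodule.__dict__)
sys.modules["submodule_demo"] = submodule

foo = types.ModuleType("foo_demo")
exec("from submodule_demo import Foo", foo.__dict__)
sys.modules["foo_demo"] = foo

# Foo is importable from foo_demo even though it is *defined* elsewhere:
from foo_demo import Foo
print(Foo is submodule.Foo)  # True
```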

Python : Mock a module that raises an exception

I need to test a function in a module that import another module which raises an exception when imported.
# a.py
raise ValueError("hello")
my_const = 'SOMETHING'

# b.py
from a import my_const

def foo():
    # do something with my_const
    return "expected_result"

# test_foo.py
def test_foo():
    from b import foo
    assert foo() == "expected_result"
Here, when I import foo in test_foo.py, a.py gets imported by b.py, the exception is raised, and the import never completes, so my_const is not available in b.py.
I'm not allowed to modify either a.py or b.py. Also, using unittest.mock.patch with @patch('a', 'my_const') does import a.py, so it doesn't work.
It is possible to create the module dynamically with importlib and add it to sys.modules, but is there another solution that doesn't require importlib?
As far as I know, you can create and import the module dynamically. Here is some code inspired by the
"Approximating importlib.import_module()" section in the importlib documentation:
from importlib.util import module_from_spec, find_spec
import sys

def patched_import(name, **kwargs):
    spec = find_spec(name)
    m = module_from_spec(spec)  # creates the module object without executing its code
    for k, v in kwargs.items():
        setattr(m, k, v)
    sys.modules[name] = m
Edit: My solution should be OK for a mock-up, but be careful, as manipulating sys.modules can have side effects.
To use it, just call:
patched_import('a', my_const='stuff')
before importing b.py.
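A sketch of an importlib-free variant, since the question asked for one: a plain stub module is registered in sys.modules up front, so a.py's top-level `raise ValueError("hello")` never executes (the module name "a" and my_const mirror the question; in a real test suite you would want to clean the entry up afterwards):

```python
import sys
import types

# Hypothetical stub standing in for module "a"; because it is registered in
# sys.modules before anything imports "a", the real a.py is never executed.
stub = types.ModuleType("a")
stub.my_const = "SOMETHING"
sys.modules["a"] = stub

# Any later `from a import my_const` (e.g. inside b.py) now hits the stub:
from a import my_const
print(my_const)  # SOMETHING
```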

Best practices for importing rarely used package in Python

My Python package depends on an external library for a few of its functions. This is a non-Python package and can be difficult to install, so I'd like users to still be able to use my package, but have it fail only when they call the functions that depend on the non-Python package.
What is the standard practice for this? I could import the non-Python package only inside the methods that use it, but I really hate doing this.
My current setup:
myInterface.py
myPackage/
--classA.py
--classB.py
The interfaces script myInterface.py imports classA and classB and classB imports the non-Python package. If the import fails I print a warning. If myMethod is called and the package isn't installed there will be some error downstream but I do not catch it anywhere, nor do I warn the user.
classB is imported every time the interface script is called so I can't have anything fail there, which is why I included the pass. Like I said above, I could import inside the method and have it fail there, but I really like keeping all of my imports in one place.
From classB.py
try:
    import someWeirdPackage
except ImportError:
    print("Cannot import someWeirdPackage")
    pass

class ClassB():
    ...
    def myMethod():
        swp = someWeirdPackage()
        ...
If you are only importing one external library, I would go for something along these lines:
try:
    import weirdModule
    available = True
except ImportError:
    available = False

def func_requiring_weirdmodule():
    if not available:
        raise ImportError('weirdModule not available')
    ...
The conditional and error checking are only needed if you want to give more descriptive errors. If not, you can omit them and let Python throw the corresponding error when trying to call the non-imported module, as in your current setup.
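A runnable version of the guard described above, using a deliberately nonexistent module name as a stand-in for weirdModule (the name is hypothetical, chosen so the ImportError branch is taken):

```python
try:
    # Hypothetical stand-in for the optional dependency; this import fails.
    import weird_module_that_does_not_exist
    available = True
except ImportError:
    available = False

def func_requiring_weirdmodule():
    # Fail with a descriptive error only when the feature is actually used.
    if not available:
        raise ImportError('weird_module_that_does_not_exist not available')
    return "did the weird thing"

try:
    func_requiring_weirdmodule()
except ImportError as e:
    print("caught:", e)
```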
If multiple functions do use weirdModule, you can wrap the checking into a function:
def require_weird_module():
    if not available:
        raise ImportError('weirdModule not available')

def f1():
    require_weird_module()
    ...

def f2():
    require_weird_module()
    ...
On the other hand, if you have multiple libraries to be imported by different functions, you can load them dynamically. Although it doesn't look pretty, python caches them and there is nothing wrong with it. I would use importlib
import importlib

def func_requiring_weirdmodule():
    weirdModule = importlib.import_module('weirdModule')
Again, if multiple of your functions import complicated external modules you can wrap them into:
def import_external(name):
    return importlib.import_module(name)

def f1():
    weird1 = import_external('weirdModule1')

def f2():
    weird2 = import_external('weirdModule2')
And last, you could create a handler to prevent importing the same module twice, something along the lines of:
class Importer(object):
    __loaded__ = {}

    @staticmethod
    def import_external(name):
        if name in Importer.__loaded__:
            return Importer.__loaded__[name]
        mod = importlib.import_module(name)
        Importer.__loaded__[name] = mod
        return mod

def f1():
    weird = Importer.import_external('weird1')

def f2():
    weird = Importer.import_external('weird1')
Although I'm pretty sure that importlib does caching behind the scenes, so you don't really need the manual caching.
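The caching claim is easy to verify: importlib.import_module consults sys.modules first, so repeated calls return the very same module object rather than re-importing it (shown here with the stdlib json module):

```python
import importlib
import sys

# Two calls for the same name yield the identical module object,
# because the second call is served from the sys.modules cache.
m1 = importlib.import_module("json")
m2 = importlib.import_module("json")
print(m1 is m2)                    # True
print(m1 is sys.modules["json"])   # True
```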
In short, although it does look ugly, there is nothing wrong with importing modules dynamically in Python. In fact, a lot of libraries rely on this. On the other hand, if it is just a special case of three methods accessing one external function, do use your approach, or my first one if you want to add custom exception handling.
I'm not really sure that there's any best practice in this situation, but I would redefine the function if it's not supported:
def warn_import():
    print("Cannot import someWeirdPackage")

try:
    import someWeirdPackage
    external_func = someWeirdPackage
except ImportError:
    external_func = warn_import

class ClassB():
    def myMethod(self):
        swp = external_func()

b = ClassB()
b.myMethod()
You can create two separate classes for the two cases. The first will be used when the package exists; the second when the package does not exist.
class ClassB1():
    def myMethod(self):
        print("someWeirdPackage exist")
        # do something

class ClassB2(ClassB1):
    def myMethod(self):
        print("someWeirdPackage does not exist")
        # do something or raise Exception

try:
    import someWeirdPackage
    class ClassB(ClassB1):
        pass
except ImportError:
    class ClassB(ClassB2):
        pass
You can also use the approach given below to overcome the problem you're facing.
class UnAvailableName(object):
    def __init__(self, name):
        self.target = name
    def __getattr__(self, attr):
        raise ImportError("{} is not available.".format(attr))

try:
    import someWeirdPackage
except ImportError:
    print("Cannot import someWeirdPackage")
    someWeirdPackage = UnAvailableName("someWeirdPackage")

class ClassB():
    def myMethod(self):
        swp = someWeirdPackage.hello()

a = ClassB()
a.myMethod()

How does name resolution work when classes are derived across modules?

Classes B and C both derive from base class A, and neither override A's method test(). B is defined in the same module as A; C is defined in a separate module. How is it that calling B.test() prints "hello", but calling C.test() fails? Shouldn't either invocation end up executing A.test() and therefore be able to resolve the symbol "message" in mod1's namespace?
I'd also gratefully receive hints on where this behaviour is documented as I've been unable to turn up anything. How are names resolved when C.test() is called, and can "message" be injected into one of the namespaces somehow?
FWIW, the reason I haven't used an instance variable (e.g. set A.message = "hello") is because I'm wanting to access a "global" singleton object and don't want to have an explicit referent to it in every other object.
mod1.py:
import mod2

class A(object):
    def test(self):
        print message

class B(A):
    pass

if __name__ == "__main__":
    message = "hello"
    A().test()
    B().test()
    mod2.C().test()
mod2.py:
import mod1

class C(mod1.A):
    pass
output is:
$ python mod1.py
hello
hello
Traceback (most recent call last):
  File "mod1.py", line 14, in <module>
    mod2.C().test()
  File "mod1.py", line 5, in test
    print message
NameError: global name 'message' is not defined
Many thanks!
EOL is correct, moving the "main" part of the program into a new file mod3.py does indeed make things work.
http://bytebaker.com/2008/07/30/python-namespaces/ further clarifies the issue.
In my original question, it turns out that the variable message is stored in the __main__ module namespace because mod1.py is being run as a script. mod2 imports mod1, but it gets a separate mod1 namespace, where the variable message does not exist. The following code snippet demonstrates this more clearly, as it writes message into mod1's namespace (not that I'd recommend doing this in real life), causing the expected behaviour.
import sys

class A(object):
    def test(self):
        print message

class B(A):
    pass

if __name__ == "__main__":
    import mod2
    message = "hello"
    sys.modules["mod1"].message = message
    A().test()
    B().test()
    mod2.C().test()
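The same two-namespace effect can be reproduced without any files at all: below, one copy of the source is executed under a stand-in for "__main__" (the script run) and another under a stand-in for "mod1" (the import), giving two independent copies of the module-level names (the module names are illustrative, not real files):

```python
import sys
import types

# The same source executed twice: once as the "script" namespace,
# once as the "imported module" namespace.
source = "message = 'hello'"

main_module = types.ModuleType("__main__demo")
exec(source, main_module.__dict__)

mod1 = types.ModuleType("mod1_demo")
exec(source, mod1.__dict__)
sys.modules["mod1_demo"] = mod1

mod1.message = "patched"    # change one copy...
print(main_module.message)  # ...the other is untouched: hello
```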
I think the best real-world fix is to move the "main" part of the program into a separate module, as EOL implies, or do:
class A(object):
    def test(self):
        print message

class B(A):
    pass

def main():
    global message
    message = "hello"
    A().test()
    B().test()
    # resolve circular import by importing in local scope
    import mod2
    mod2.C().test()

if __name__ == "__main__":
    # break into mod1 namespace from __main__ namespace
    import mod1
    mod1.main()
Could you use a class attribute instead of a global? The following works:
import mod2

class A(object):
    message = "Hello"  # Class attribute (not duplicated in instances)
    def test(self):
        print self.message  # Class A attribute can be overridden by subclasses

class B(A):
    pass

if __name__ == "__main__":
    A().test()
    B().test()
    mod2.C().test()
Not using globals is cleaner: in the code above, message is explicitly attached to the class it is used in.
That said, I am also very curious as to why the global message is not found by mod2.C().test().
Things work as expected, though, if the cross-importing is removed (no main program in mod1.py, and no import mod2): importing mod1 and mod2 from mod3.py, doing mod1.message = "Hello" there and mod2.C().test() works. I am therefore wondering if the problem is not related to cross-importing…

how do I make a dynamically imported module available in another module or file?

I have 3 files a.py, b.py, c.py
I am trying to dynamically import a class called C, defined in c.py, from within a.py, and have the evaluated name available in b.py.
Running python a.py currently catches the NameError. I'm trying to avoid this and create an instance in b.py which calls C.do_int(10).
a.py
import b

# older
# services = __import__('services')
# interface = eval('services.MyRestInterface')

# python2.7
import importlib
module = importlib.import_module('c')
interface = eval('module.C')

# will work
i = interface()
print i.do_int(10)

# interface isn't defined in b.py after call to eval
try:
    print b.call_eval('interface')
except NameError:
    print "b.call_eval('interface'): interface is not defined in b.py"
b.py
def call_eval(name):
    interface = eval(name)
    i = interface()
    return i.do_int(10)
c.py
class C(object):
    my_int = 32
    def do_int(self, number):
        self.my_int += number
        return self.my_int
How can I achieve this?
interface only exists in a's namespace. You can put a reference to the interface into b's namespace like this:
b.interface = interface
try:
    print b.call_eval('interface')
except NameError:
    print "b.call_eval('interface'): interface is not defined in b.py"
I'm not sure why you're not just passing the interface to call_eval, though.
I'm sure there is a better solution that avoids this entirely, but the following could do the trick:
a.py:
shared_variables = {}
import b
import c

shared_variables['C'] = c.C
b.do_something_with('C')
b.py:
from __main__ import shared_variables

def do_something_with(name):
    print(shared_variables[name])
If a.py already loads the class, I fail to see the reason to pass it by name. Instead, do
# b.py
def call_eval(klass):
    j = klass()
    return j.do_int(10)
and, in a.py, do
import importlib

module = importlib.import_module('c')
interface = getattr(module, 'C')
b.call_eval(interface)
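The getattr lookup here behaves exactly like ordinary attribute access on the module object; for instance, with the stdlib json module standing in for c.py:

```python
import importlib

# getattr on an imported module retrieves the same object as dotted access.
module = importlib.import_module("json")
decoder = getattr(module, "JSONDecoder")
print(decoder is module.JSONDecoder)  # True

# The retrieved class is fully usable:
print(decoder().decode('{"a": 1}'))  # {'a': 1}
```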
