I am writing a Python wrapper, implemented as a singleton, around a C DLL that I wrote.
Most of the functions in this library return a negative value in case of error. When an error occurs, I would like to get the error description through the get_error_description function.
This works well, but I would like my wrapper to be less boilerplate-heavy. For each function I have to rewrite the same things:
Log the debug message
Check the exit status
Is there a way to make this implementation better without using eval?
import sys
import logging
from ctypes import cdll

class Singleton(type):
    _instances = {}
    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class MyLibError(Exception):
    def __init__(self, errno):
        lib = MyLib()
        super(MyLibError, self).__init__(lib.get_error_description(errno))

class MyLib():
    """Python wrapper to `lib.dll`"""
    __metaclass__ = Singleton  # Python 2 style; in Python 3: class MyLib(metaclass=Singleton)

    def __init__(self):
        self._lib = cdll.LoadLibrary('lib.dll')
        self._configure_logger()

    def _configure_logger(self):
        self.logger = logging.getLogger("MyLib")
        self.logger.setLevel(logging.DEBUG)
        ch = logging.StreamHandler(sys.stdout)
        ch.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        ch.setFormatter(formatter)
        self.logger.addHandler(ch)

    def foo(self, a):
        return_code = self._lib.foo(a)
        self.logger.debug("foo(%d) returned %d" % (a, return_code))
        if return_code < 0:
            raise MyLibError(return_code)
The final goal is to auto-generate this wrapper from the library codebase, in a way that lets me export the docstrings properly.
You may want to look at the functools module for some built-in functions that can help.
In Python, functions are first-class objects and so can be passed around and acted on like other objects. One step here would be to put the common code into a function. Something like
def _call_my_lib(self, f, *args):
    return_code = f(*args)
    # ctypes sets __name__ on functions looked up by attribute
    self.logger.debug("%s%r returned %d" % (f.__name__, args, return_code))
    if return_code < 0:
        raise MyLibError(return_code)
    return return_code

def foo(self, *args):
    return self._call_my_lib(self._lib.foo, *args)
The functools.partial function can take care of "partially" calling a function at one time, then finishing the arguments at a later time. It can be a little tricky to combine bound methods with functools, but the result can be nice and compact.
# e.g. inside __init__, so self is bound:
self.foo = functools.partial(self._call_my_lib, self._lib.foo)
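Building on the partial idea, here is a minimal sketch of the auto-generation goal, assuming the _call_my_lib and _configure_logger methods shown above. The _FUNCTIONS table and its entries are invented for illustration, but the loop shows how partial plus setattr can generate every wrapper, docstring included (functools.partial objects accept attribute assignment, so __doc__ can be set on them):

import functools

class MyLib():
    """Python wrapper to `lib.dll`"""

    # Hypothetical name -> docstring table; in the real project this
    # would be generated from the library codebase.
    _FUNCTIONS = {'foo': 'Call foo in lib.dll.', 'bar': 'Call bar in lib.dll.'}

    def __init__(self):
        self._lib = cdll.LoadLibrary('lib.dll')
        self._configure_logger()
        for name, doc in self._FUNCTIONS.items():
            wrapper = functools.partial(self._call_my_lib, getattr(self._lib, name))
            wrapper.__doc__ = doc
            setattr(self, name, wrapper)  # instance attribute; no per-function def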
A decorator may be useful here, especially if you need to do any other processing in Python before calling into your library.
def mylib(f):
    def _call_my_lib(self, *args):
        return_code = f(self, *args)
        self.logger.debug("%s%r returned %d" % (f.__name__, args, return_code))
        if return_code < 0:
            raise MyLibError(return_code)
        return return_code
    return _call_my_lib

class MyLib:
    @mylib
    def foo(self, a):
        return self._lib.foo(a)
Note that I haven't tested any of this code, so it may contain typos or some oversights but should provide some ideas for ways to implement cross-cutting concerns in Python.
Issue description
Consider the following scenario:
I have a class "Speaker" which is able to deliver speech.
Some methods, e.g. "speak", actually deliver the speech.
Before delivering the speech we need to turn on the mic, and after the speech we need to turn it off.
To avoid doing this in every method that delivers speech, I created a decorator to turn the mic on and off.
It is an imaginary example; my real code is about connecting to and disconnecting from a database.
Code:
def use_mic(method):
    def wrapper(self, *args, **kwargs):  # [3]
        self.turn_on_mic()  # [2] Method "turn_on_mic" is called, it is not a method of "str"
        method(self, *args, **kwargs)
        self.turn_off_mic()  # [2]
    return wrapper

class Speaker:
    def __init__(self, name):
        self.name = name
        self.mic_on = False

    @use_mic
    def speak(self, message):
        print("[%s speaks] %s" % (self.name, message))

    def turn_on_mic(self):
        print("Turn on microphone")
        self.mic_on = True

    def turn_off_mic(self):
        print("Turn off microphone")
        self.mic_on = False

    def greet(self):
        self.speak('Good morning my neighbours')  # [1] Lint warning here, because [2]

john = Speaker('John')
john.greet()
The code runs normally, but PyCharm reports a lint warning.
PyCharm apparently regards the str argument passed at [1] as the first parameter "self" at [3], and warns "no such attribute" based on [2]. But it is common knowledge that Python passes the instance as the first parameter of a method when it is called via <object>.<method> or self.<method>.
My question
Why does PyCharm misunderstand the code?
Is it a bug in PyCharm's default lint tool, or did I write non-Pythonic code?
What is the Pythonic way of writing such code?
When I change [3] to the following code the warning disappears, but is it the Pythonic way?
code:
def wrapper(*args, **kwargs):  # [3]
    self = args[0]
I'm using PyCharm 2021.1.3 (Community Edition) and Python 3.7.5
Honestly, I'd use a context manager for this sort of thing. They're easy to implement with the contextmanager decorator (which is well-supported by PyCharm too).
from contextlib import contextmanager

class Speaker:
    def __init__(self, name):
        self.name = name
        self.mic_on = False

    @contextmanager
    def using_mic(self):
        self.turn_on_mic()
        try:
            yield
        finally:
            self.turn_off_mic()

    def speak(self, message):
        with self.using_mic():
            print("[%s speaks] %s" % (self.name, message))

    def turn_on_mic(self):
        print("Turn on microphone")
        self.mic_on = True

    def turn_off_mic(self):
        print("Turn off microphone")
        self.mic_on = False

    def greet(self):
        self.speak('Good morning my neighbours')

john = Speaker('John')
john.greet()
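If you do want to keep a decorator for this, decorating the inner wrapper with functools.wraps (not part of the original snippet) usually helps static analyzers such as PyCharm see through the decorator, since it copies the wrapped method's metadata. A minimal sketch, with a try/finally added so the mic is switched off even if speaking raises:

from functools import wraps

def use_mic(method):
    @wraps(method)  # copies __name__, __doc__ and signature metadata
    def wrapper(self, *args, **kwargs):
        self.turn_on_mic()
        try:
            return method(self, *args, **kwargs)
        finally:
            self.turn_off_mic()
    return wrapper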
I've been on a tear of writing decorators recently.
One of the ones I just wrote lets you put the decorator just before a class definition, and it causes every method of the class to print some logging info when it's run (more for debugging / initial, super-basic speed tests during a build):
import time
import inspect

def class_logit(cls):
    class NCls(object):
        def __init__(self, *args, **kwargs):
            self.instance = cls(*args, **kwargs)

        @staticmethod
        def _class_logit(original_function):
            def arg_catch(*args, **kwargs):
                start = time.time()
                result = original_function(*args, **kwargs)
                print('Called: {0} | From: {1} | Args: {2} | Kwargs: {3} | Run Time: {4}'
                      ''.format(original_function.__name__, str(inspect.getmodule(original_function)),
                                args, kwargs, time.time() - start))
                return result
            return arg_catch

        def __getattribute__(self, s):
            try:
                x = super(NCls, self).__getattribute__(s)
            except AttributeError:
                pass
            else:
                return x
            x = self.instance.__getattribute__(s)
            if type(x) == type(self.__init__):  # is it a bound method?
                return self._class_logit(x)
            else:
                return x
    return NCls
This works great when applied to a very basic class I create.
Where I start to encounter issues is when I apply it to a class that inherits from another; for instance, using Qt:
@scld.class_logit
class TestWindow(QtGui.QDialog):
    def __init__(self):
        print(self)
        super(TestWindow, self).__init__()

a = TestWindow()
I'm getting the following error, and I'm not entirely sure what to do about it!
self.instance = cls(*args, **kwargs)
File "<string>", line 15, in __init__
TypeError: super(type, obj): obj must be an instance or subtype of type
Any help would be appreciated!
You are being a bit too intrusive with your decorator.
While a somewhat aggressive approach is needed if you want to profile methods defined by the Qt framework itself, your decorator replaces the entire class with a proxy.
Qt bindings are somewhat complicated indeed, and it is hard to tell exactly why this errors on instantiation.
So, first things first: if your intent is to apply the decorator to a class hierarchy defined by yourself, or at least one defined in pure Python, a good approach is a metaclass. With a metaclass you can decorate each method once, when the class is created, and not mess around at runtime each time a method is retrieved (a sketch follows).
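A minimal sketch of the metaclass idea for a pure-Python class (all names are illustrative; the log_dec helper mirrors the one used further down):

import types

def log_dec(func):
    def wrapper(*args, **kwargs):
        print(func.__name__, args, kwargs)
        return func(*args, **kwargs)
    return wrapper

class LoggedMeta(type):
    def __new__(mcs, name, bases, namespace):
        for attr, value in list(namespace.items()):
            # wrap every plain function (i.e. future method), skipping dunders
            if isinstance(value, types.FunctionType) and not attr.startswith('__'):
                namespace[attr] = log_dec(value)
        return super().__new__(mcs, name, bases, namespace)

class PureBase(metaclass=LoggedMeta):
    def ping(self):
        return 'pong'

PureBase().ping()  # prints: ping (<__main__.PureBase object ...>,) {}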
But Qt, like some other libraries, has its methods and classes defined in native code, and that will prevent you from wrapping existing methods in a new class. So wrapping the methods on attribute retrieval, in __getattribute__, can work.
Here is a simpler approach that, instead of using a proxy, just plugs in a foreign __getattribute__ that does the wrap-with-logger thing you want.
Your mileage may vary with it. In particular, it won't be triggered if one method of the class is called by another method in native code, as that call won't go through Python's attribute-retrieval mechanism (it will use C++ method dispatch directly).
from PyQt5 import QtWidgets, QtGui

def log_dec(func):
    def wrapper(*args, **kwargs):
        print(func.__name__, args, kwargs)
        return func(*args, **kwargs)
    return wrapper

def decorate(cls):
    def __getattribute__(self, attr):
        # super(cls, self) skips this override, avoiding infinite recursion
        attr = super(cls, self).__getattribute__(attr)
        if callable(attr):
            return log_dec(attr)
        return attr
    cls.__getattribute__ = __getattribute__
    return cls

@decorate
class Example(QtGui.QWindow):
    pass

app = QtWidgets.QApplication([])
w = Example()
w.show()
app.exec_()  # start the event loop so the window stays up
(Of course, just replace the basic logger with your fancy logger from above.)
I am working on a project that has many functions. I am using the standard logging module. The requirement is to log DEBUG messages that say:
<timestamp> DEBUG entered foo()
<timestamp> DEBUG exited foo()
<timestamp> DEBUG entered bar()
<timestamp> DEBUG exited bar()
But I don't want to write the DEBUG log calls inside every function. Is there a way in Python to log entry into and exit from functions automatically?
I don't want to apply a decorator to every function, unless that is the only solution in Python.
Any reason you don't want to use a decorator? It's pretty simple:
from functools import wraps
import logging

logging.basicConfig(filename='some_logfile.log', level=logging.DEBUG)

def tracelog(func):
    @wraps(func)  # to preserve docstring
    def inner(*args, **kwargs):
        logging.debug('entered {0}, called with args={1}, kwargs={2}'.format(func.__name__, args, kwargs))
        result = func(*args, **kwargs)
        logging.debug('exited {0}'.format(func.__name__))
        return result
    return inner
If you get that, then passing in an independent logger is just another layer deep:
def tracelog(log):
    def real_decorator(func):
        @wraps(func)
        def inner(*args, **kwargs):
            log.debug('entered {0}, called with args={1}, kwargs={2}'.format(func.__name__, args, kwargs))
            result = func(*args, **kwargs)
            log.debug('exited {0}'.format(func.__name__))
            return result
        return inner
    return real_decorator
The cool thing is that this works for both functions and methods.
Usage example:
logger = logging.getLogger(__name__)  # any logger object works here

@tracelog(logger)
def somefunc():
    print('running somefunc')
You want to have a look at sys.settrace.
There is a nice explanation with code examples for call tracing here: https://pymotw.com/2/sys/tracing.html
A very primitive way to do it; see the link for more worked examples:
import sys

def trace_calls(frame, event, arg):
    co = frame.f_code
    func_name = co.co_name
    if func_name == 'write':
        # Ignore write() calls from print statements
        return
    if event == 'call':
        print("ENTER: %s" % func_name)
    elif event == 'return':
        print("EXIT: %s" % func_name)
    # return the local trace function so 'return' events are reported too
    return trace_calls

sys.settrace(trace_calls)
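A quick usage sketch (function names are illustrative); note that tracing applies process-wide once installed, so remember to switch it off:

def bar():
    pass

def foo():
    bar()

sys.settrace(trace_calls)
foo()                # ENTER: foo / ENTER: bar / EXIT: bar / EXIT: foo
sys.settrace(None)   # switch tracing off again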
I'm trying to write a function decorator that wraps functions. If the function is called with dry=True, the function should simply print its name and arguments. If it's called with dry=False, or without dry as an argument, it should run normally.
I have a toy working with:
from functools import wraps

def drywrap(func):
    @wraps(func)
    def dryfunc(*args, **kwargs):
        dry = kwargs.get('dry')
        if dry is not None and dry:
            print("Dry run {} with args {} {}".format(func.__name__, args, kwargs))
        else:
            if dry is not None:
                del kwargs['dry']
            return func(*args, **kwargs)
    return dryfunc

@drywrap
def a(something):
    # import pdb; pdb.set_trace()
    print("a")
    print(something)

a('print no dry')
a('print me', dry=False)
a('print me too', dry=True)
...but when I put this into my application, the print statement is never run even when dry=True in the arguments. I've also tried:
from functools import wraps

def drywrap(func):
    @wraps(func)
    def dryfunc(*args, **kwargs):
        def printfunc(*args, **kwargs):
            print("Dry run {} with args {} {}".format(func.__name__, args, kwargs))
        dry = kwargs.get('dry')
        if dry is not None and dry:
            return printfunc(*args, **kwargs)
        else:
            if dry is not None:
                del kwargs['dry']
            return func(*args, **kwargs)
    return dryfunc
I should note that, as part of my application, the drywrap function lives in a utils file that is imported before being used:
src/build/utils.py <--- has drywrap
src/build/build.py
src/build/plugins/subclass_build.py <--- uses drywrap via "from build.utils import drywrap"
The following GitHub project provides "a useful dry-run decorator for python" that allows checking "the basic logic of our programs without running certain operations that are lengthy and cause side effects":
https://github.com/haarcuba/dryable
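From what I recall of that project's README (treat the exact names below as assumptions and verify against the repository), usage looks roughly like this:

import dryable

@dryable.Dryable()          # assumed decorator, per the project's README
def launch_missiles():
    print('launching missiles!')

dryable.set(True)           # assumed global switch that activates dry-run mode
launch_missiles()           # now only logged, not executed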
This feels like a solution.
def drier(dry=False):
    def wrapper(func):
        def inner_wrapper(*args, **kwargs):
            if dry:
                print(func.__name__)
                print(args)
                print(kwargs)
            else:
                return func(*args, **kwargs)
        return inner_wrapper
    return wrapper

@drier(True)
def test(name):
    print("hello " + name)

test("girish")

@drier()
def test2(name, last):
    print("hello {} {}".format(name, last))
But as you can see, even when you don't have any arguments you have to write @drier() instead of @drier.
Note: using Python 3.4.1.
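If the mandatory parentheses bother you, a common trick is to detect whether the decorator was handed a function directly. A minimal sketch:

import functools

def drier(arg=False):
    def decorate(func, dry):
        @functools.wraps(func)
        def inner_wrapper(*args, **kwargs):
            if dry:
                print(func.__name__, args, kwargs)
            else:
                return func(*args, **kwargs)
        return inner_wrapper

    if callable(arg):                             # used bare: @drier
        return decorate(arg, dry=False)
    return lambda func: decorate(func, dry=arg)   # used with args: @drier(True)

@drier            # no parentheses needed now
def test2(name, last):
    print("hello {} {}".format(name, last))

@drier(True)
def test(name):
    print("hello " + name)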
I have a general purpose function that sends info about exceptions to an application log.
I use the exception_handler function from within methods in classes. The app log handler that is passed into and called by the exception_handler creates a JSON string that is what actually gets sent to the logfile. This all works fine.
import sys
import traceback

def exception_handler(log, terminate=False):
    exc_type, exc_value, exc_tb = sys.exc_info()
    filename, line_num, func_name, text = traceback.extract_tb(exc_tb)[-1]
    log.error('{0} Thrown from module: {1} in {2} at line: {3} ({4})'.format(exc_value, filename, func_name, line_num, text))
    del (filename, line_num, func_name, text)
    if terminate:
        sys.exit()
I use it as follows: (a hyper-simplified example)
from utils import exception_handler

class Demo1(object):
    def __init__(self):
        self.log = {a class that implements the application log}

    def demo(self, name):
        try:
            print(name)
        except Exception:
            exception_handler(self.log, True)
I would like to alter exception_handler for use as a decorator for a large number of methods, i.e.:
@handle_exceptions
def func1(self, name):
    {some code that gets wrapped in a try / except by the decorator}
I've looked at a number of articles about decorators, but I haven't yet figured out how to implement what I want to do. I need to pass a reference to the active log object and also pass 0 or more arguments to the wrapped function. I'd be happy to convert exception_handler to a method in a class if that makes things easier.
Such a decorator would simply be:
def handle_exceptions(f):
    def wrapper(*args, **kw):
        try:
            return f(*args, **kw)
        except Exception:
            self = args[0]
            exception_handler(self.log, True)
    return wrapper
This decorator simply calls the wrapped function inside a try suite.
This can be applied to methods only, as it assumes the first argument is self.
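If you also need it for plain functions, one option (a sketch; app_log is a hypothetical logger object) is a decorator factory that takes the log object explicitly instead of reading it from self:

from functools import wraps

def handle_exceptions_with(log, terminate=False):
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kw):
            try:
                return f(*args, **kw)
            except Exception:
                exception_handler(log, terminate)
        return wrapper
    return decorator

# usage on any function, method or not:
# @handle_exceptions_with(app_log)
# def func1(name): ...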
Thanks to Martijn for pointing me in the right direction.
I couldn't get his suggested solution to work, but after a little more searching based on his example, the following works fine:
def handle_exceptions(fn):
    from functools import wraps

    @wraps(fn)
    def wrapper(self, *args, **kw):
        try:
            return fn(self, *args, **kw)
        except Exception:
            exception_handler(self.log)
    return wrapper