I am working on a project that has many functions, and I am using the standard logging module. The requirement is to log DEBUG messages recording entry into and exit from each function, like:
<timestamp> DEBUG entered foo()
<timestamp> DEBUG exited foo()
<timestamp> DEBUG entered bar()
<timestamp> DEBUG exited bar()
But I don't want to write the DEBUG logs inside every function. Is there a way in Python to automatically log the entry and exit of functions?
I don't want to apply a decorator to every function, unless that is the only solution in Python.
Any reason you don't want to use a decorator? It's pretty simple:
from functools import wraps
import logging
logging.basicConfig(filename='some_logfile.log', level=logging.DEBUG)
def tracelog(func):
    @wraps(func)  # preserves the wrapped function's name and docstring
    def inner(*args, **kwargs):
        logging.debug('entered {0}, called with args={1}, kwargs={2}'.format(func.__name__, args, kwargs))
        result = func(*args, **kwargs)
        logging.debug('exited {0}'.format(func.__name__))
        return result
    return inner
If you get that, then passing in an independent logger is just another layer deep:
def tracelog(log):
    def real_decorator(func):
        @wraps(func)
        def inner(*args, **kwargs):
            log.debug('entered {0}, called with args={1}, kwargs={2}'.format(func.__name__, args, kwargs))
            result = func(*args, **kwargs)
            log.debug('exited {0}'.format(func.__name__))
            return result
        return inner
    return real_decorator
A cool thing is that this works for both functions and methods.
Usage example:
@tracelog(logger)
def somefunc():
    print('running somefunc')
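For reference, here is a minimal sketch of how the logger used in the example above might be set up (the logger name and format string are assumptions, not part of the original answer):
import logging

# Hypothetical setup for the `logger` passed to @tracelog
logger = logging.getLogger('tracer')
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger.addHandler(handler)

somefunc()
# Expected output (timestamps will vary):
# 2024-01-01 12:00:00,000 DEBUG entered somefunc, called with args=(), kwargs={}
# running somefunc
# 2024-01-01 12:00:00,001 DEBUG exited somefunc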
You want to have a look at sys.settrace.
There is a nice explanation with code examples for call tracing here: https://pymotw.com/2/sys/tracing.html
A very primitive way to do it (look at the link for more worked examples):
import sys

def trace_calls(frame, event, arg):
    func_name = frame.f_code.co_name
    if func_name == 'write':
        # Ignore write() calls from print statements
        return
    if event == 'call':
        print("ENTER: %s" % func_name)
    elif event == 'return':
        print("EXIT: %s" % func_name)
    # Returning the trace function keeps it installed for the new frame,
    # so that its 'return' event is also reported
    return trace_calls

sys.settrace(trace_calls)
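As a hedged illustration (these function names are made up for the demo), with tracing installed a pair of nested calls would be reported like this:
def bar():
    return 42

def foo():
    return bar()

foo()
# ENTER: foo
# ENTER: bar
# EXIT: bar
# EXIT: foo

sys.settrace(None)  # turn tracing off again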
Context:
I'm writing a personal Python module to simplify some scripts I have lying around. One of my functions is untested and may have undesirable edge cases that I still have to consider. To keep myself from relying on it in other modules or functions, I was wondering whether I could force it to raise an error if not called directly from the REPL.
I'm not asking whether this is a good idea. It obviously isn't, because it defeats the purpose of writing a function in the first place. I'm wondering whether it is possible in Python, and how to do it.
Question:
Is it possible to have a function raise an error if not called interactively? For example:
def is_called_from_top_level():
    "How to implement this?"
    pass

def shady_func():
    "Only for testing at the REPL. Calling from elsewhere will raise."
    if not is_called_from_top_level():
        raise NotImplementedError("Shady function can only be called directly.")
    return True

def other_func():
    "Has an indirect call to shady."
    return shady_func()
And then at a REPL:
[In:1] shady_func()
[Out:1] True
[In:2] other_func()
[Out:2] NotImplementedError: "Shady function can only be called directly."
Try checking for ps1 on sys.
import sys
def dangerous_util_func(a, b):
    is_interactive = bool(getattr(sys, 'ps1', False))
    print(is_interactive)  # Prints True or False
    return a + b
You can even get fancy and make a decorator for this to make it more reusable.
import sys
from functools import wraps
def repl_only(func):
    @wraps(func)
    def wrapped(*args, **kwargs):
        is_interactive = bool(getattr(sys, 'ps1', False))
        if not is_interactive:
            raise NotImplementedError("Can only be called from REPL")
        return func(*args, **kwargs)
    return wrapped

@repl_only
def dangerous_util_func(a, b):
    return a + b
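A quick hedged sanity check (illustrative only): run from a script, sys.ps1 is not set, so the call raises; typed at an interactive >>> prompt, it returns normally.
try:
    dangerous_util_func(1, 2)
except NotImplementedError as exc:
    print(exc)  # prints: Can only be called from REPL
# At an interactive prompt, sys.ps1 exists,
# so dangerous_util_func(1, 2) would simply return 3.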
DISCLAIMER: This is a bit of a hack, and may not work across different Python / IPython / Jupyter versions, but the underlying idea still holds, i.e. use inspect to get an idea of who is calling.
The code below was tested with Python 3.7.3, IPython 7.6.1 and Jupyter Notebook Server 5.7.8.
Using inspect (obviously), one can look for distinctive features of the REPL frame:
inside a Jupyter Notebook you can check whether the repr() of the previous frame contains the string 'code <module>';
using Python / IPython you can check whether the code representation of the previous frame starts at line 1.
In code, this would look like:
import inspect

def is_called_from_top_level():
    "How to implement this?"
    pass

def shady_func():
    "Only for testing at the REPL. Calling from elsewhere will raise."
    frame = inspect.currentframe()
    is_interactive = (
        'code <module>' in repr(frame.f_back)  # Jupyter
        or 'line 1>' in repr(frame.f_back.f_code))  # Python / IPython
    if not is_interactive:
        raise NotImplementedError("Shady function can only be called directly.")
    return True

def other_func():
    "Has an indirect call to shady."
    return shady_func()

shady_func()
# True
other_func()
# raises NotImplementedError
(EDITED to include support for both Jupyter Notebook and Python / IPython).
As suggested by @bananafish, this is actually a good use case for a decorator:
import inspect
import functools

def repl_only(func):
    @functools.wraps(func)
    def wrapped(*args, **kwargs):
        frame = inspect.currentframe()
        is_interactive = (
            'code <module>' in repr(frame.f_back)  # Jupyter
            or 'line 1>' in repr(frame.f_back.f_code))  # Python / IPython
        if not is_interactive:
            raise NotImplementedError('Can only be called from REPL')
        return func(*args, **kwargs)
    return wrapped

@repl_only
def foo():
    return True

def bar():
    return foo()

print(foo())
# True
print(bar())
# raises NotImplementedError
You can do something like this:
import inspect

def other():
    shady()

def shady():
    curfrm = inspect.currentframe()
    calframe = inspect.getouterframes(curfrm, 2)
    caller = calframe[1][3]
    if '<module>' not in caller:
        raise Exception("Not an acceptable caller")
    print("that's fine")

if __name__ == '__main__':
    import sys
    args = sys.argv[1:]
    shady()
    other()
The inspect module lets you retrieve information such as the function's caller. You may have to dig a bit deeper if you have edge cases.
Inspired by the comment on the OP suggesting looking at the stack trace, @norok2's solution based on direct caller inspection, and @bananafish's use of the decorator, I came up with an alternative solution that requires neither inspect nor sys.
The idea is to throw and catch to get a handle on a traceback object (essentially our stack trace), and then do the direct caller inspection.
from functools import wraps

def repl_only(func):
    @wraps(func)
    def wrapped(*args, **kwargs):
        try:
            raise Exception
        except Exception as e:
            if "module" not in str(e.__traceback__.tb_frame.f_back)[-10:]:
                raise NotImplementedError(f"{func.__name__} has to be called from the REPL!")
            return func(*args, **kwargs)
    return wrapped

@repl_only
def dangerous_util_func(a, b):
    return a + b

def foo():
    return dangerous_util_func(1, 2)
Here dangerous_util_func will run and foo will throw.
I am trying to get the correct function name while using two decorators:
1> profile - from memory_profiler import profile
2> custom timing decorator
from functools import wraps
from time import time

def timing(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        start = time()
        result = f(*args, **kwargs)
        end = time()
        print('Elapsed time: {} - {}'.format(wrapper.__name__, end - start))
        return result
    return wrapper
They are used in the following order, as defined below:
@timing
@profile
def my_function():
    something.....
The problem is that both decorators work well individually, but when used together I don't get the correct name via the timing decorator: I always get wrapper instead of the actual function name.
How do I get the actual function name instead of "wrapper"?
Dealt with the same problem today. In my case, I have a few decorators that extend other decorators by composition, and also take input arguments themselves. I needed to log some contextual information about the function being decorated, so I was naturally inclined to use the func.__name__ attribute, and I was getting the "wrapper" string like you.
My solution might not be the most elegant one (by far) but it's legal and it worked 🤷
def log_context(logger):
    def decorator(func):
        def wrapper(*args, **kwargs):
            logger.debug({
                func.__name__: {
                    "args": args,
                    "kwargs": kwargs,
                },
            })
            return func(*args, **kwargs)
        # Persist the name of the original function
        # throughout the stack
        wrapper.__name__ = func.__name__
        return wrapper
    return decorator
Then I can have something like this without a problem:
@log_context(logger)
@a_similar_decorator(logger)
def foo(x, y=None):
    pass

foo(1, y=123)
This would output something like this:
{'foo': {'args': (1,), 'kwargs': {'y': 123}}}
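A more conventional route, assuming every decorator in the chain is yours to edit, is to apply functools.wraps at each level so that __name__ and the docstring propagate automatically; a minimal sketch:
import functools

def log_context(logger):
    def decorator(func):
        @functools.wraps(func)  # copies __name__, __doc__, etc. onto wrapper
        def wrapper(*args, **kwargs):
            logger.debug({func.__name__: {"args": args, "kwargs": kwargs}})
            return func(*args, **kwargs)
        return wrapper
    return decorator
This only fixes the reported name when the other decorators in the stack (like profile in the question) also preserve the wrapped function's metadata.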
I am writing a Python wrapper, implemented as a singleton, around a C DLL that I wrote.
Most of the functions in this library return a negative value in case of error. When an error occurs, I would like to get the error description through the get_error_description function.
This works well, but I would like my wrapper to be less boilerplate. For each function I have to rewrite the same thing:
Log the debug message
Check the exit status
Is there a way to make this implementation better without using eval?
import sys
import logging
from ctypes import cdll

class Singleton(type):
    _instances = {}
    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class MyLibError(Exception):
    def __init__(self, errno):
        lib = MyLib()
        super(MyLibError, self).__init__(lib.get_error_description(errno))

class MyLib():
    """Python wrapper to `lib.dll`"""
    __metaclass__ = Singleton

    def __init__(self):
        self._lib = cdll.LoadLibrary('lib.dll')

    def _configure_logger(self):
        self.logger = logging.getLogger("MyLib")
        self.logger.setLevel(logging.DEBUG)
        ch = logging.StreamHandler(sys.stdout)
        ch.setLevel(logging.DEBUG)
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        ch.setFormatter(formatter)
        self.logger.addHandler(ch)

    def foo(self, a):
        return_code = self._lib.foo(a)
        self.logger.debug("foo(%d) returned %d" % (a, return_code))
        if return_code < 0:
            raise MyLibError(return_code)
The final goal is to auto-generate this wrapper from the library codebase where I will be able to export the docstrings properly.
You may want to look at the functools module for some built-in functions that can help.
In Python, functions are first-class objects and so can be passed around and acted on like other objects. One step here would be to put the common code into a function. Something like
def _call_my_lib(self, f, args):
    return_code = f(*args)
    self.logger.debug("%s%r returned %d" % (f.__name__, args, return_code))
    if return_code < 0:
        raise MyLibError(return_code)

def foo(self, *args):
    self._call_my_lib(self._lib.foo, args)
The functools.partial function can take care of "partially" calling a function at one time, then finish the arguments at a later time. It might be a little tricky to combine bound methods with functools, but it might be nice and compact.
foo = functools.partial(self._call_my_lib, self._lib.foo)
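As a hedged sketch of how that could be wired up (the __init__ placement and the variadic helper signature are my assumptions; class, method, and attribute names come from the question):
import functools
from ctypes import cdll

class MyLib:
    def __init__(self):
        self._lib = cdll.LoadLibrary('lib.dll')
        self._configure_logger()
        # Bind one checked wrapper per exported C function
        self.foo = functools.partial(self._call_my_lib, self._lib.foo)

    def _call_my_lib(self, f, *args):
        return_code = f(*args)
        self.logger.debug("%s%r returned %d" % (f.__name__, args, return_code))
        if return_code < 0:
            raise MyLibError(return_code)
After construction, self.foo(5) forwards to self._call_my_lib(self._lib.foo, 5), so the logging and error check live in one place.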
A decorator may be useful here, especially if you need to do any other processing in Python before calling into your library.
def mylib(f):
    def _call_my_lib(self, *args):
        return_code = f(self, *args)
        self.logger.debug("%s%r returned %d" % (f.__name__, args, return_code))
        if return_code < 0:
            raise MyLibError(return_code)
    return _call_my_lib

class MyLib:
    @mylib
    def foo(self, a):
        return self._lib.foo(a)
Note that I haven't tested any of this code, so it may contain typos or some oversights but should provide some ideas for ways to implement cross-cutting concerns in Python.
I'm trying to write a function decorator that wraps functions. If the function is called with dry=True, the function should simply print its name and arguments. If it's called with dry=False, or without dry as an argument, it should run normally.
I have a toy working with:
from functools import wraps

def drywrap(func):
    @wraps(func)
    def dryfunc(*args, **kwargs):
        dry = kwargs.get('dry')
        if dry is not None and dry:
            print("Dry run {} with args {} {}".format(func.__name__, args, kwargs))
        else:
            if dry is not None:
                del kwargs['dry']
            return func(*args, **kwargs)
    return dryfunc

@drywrap
def a(something):
    # import pdb; pdb.set_trace()
    print("a")
    print(something)

a('print no dry')
a('print me', dry=False)
a('print me too', dry=True)
...but when I put this into my application, the print statement is never run even when dry=True in the arguments. I've also tried:
from functools import wraps

def drywrap(func):
    @wraps(func)
    def dryfunc(*args, **kwargs):
        def printfunc(*args, **kwargs):
            print("Dry run {} with args {} {}".format(func.__name__, args, kwargs))
        dry = kwargs.get('dry')
        if dry is not None and dry:
            return printfunc(*args, **kwargs)
        else:
            if dry is not None:
                del kwargs['dry']
            return func(*args, **kwargs)
    return dryfunc
I should note that as part of my application, the drywrap function lives in a utils file that is imported before being used:
src/build/utils.py <--- has drywrap
src/build/build.py
src/build/plugins/subclass_build.py <--- use drywrap via "from build.utils import drywrap"
The following GitHub project provides "a useful dry-run decorator for python" that allows checking "the basic logic of our programs without running certain operations that are lengthy and cause side effects."
see https://github.com/haarcuba/dryable
This feels like a solution.
def drier(dry=False):
    def wrapper(func):
        def inner_wrapper(*args, **kwargs):
            if dry:
                print(func.__name__)
                print(args)
                print(kwargs)
            else:
                return func(*args, **kwargs)
        return inner_wrapper
    return wrapper

@drier(True)
def test(name):
    print("hello " + name)

test("girish")

@drier()
def test2(name, last):
    print("hello {} {}".format(name, last))
But as you can see, even when you don't have any arguments you have to write @drier() instead of @drier.
Note: using Python 3.4.1.
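If the bare @drier spelling matters, a common workaround (a hedged sketch, not part of the original answer) is to detect whether the decorator was handed a function directly:
import functools

def drier(arg=False):
    def wrapper(func, dry=False):
        @functools.wraps(func)
        def inner_wrapper(*args, **kwargs):
            if dry:
                print(func.__name__)
                print(args)
                print(kwargs)
            else:
                return func(*args, **kwargs)
        return inner_wrapper
    if callable(arg):
        # Bare @drier: arg is the decorated function itself, dry stays False
        return wrapper(arg)
    # @drier() / @drier(True): arg is the dry flag
    return functools.partial(wrapper, dry=arg)
With this, @drier, @drier() and @drier(True) all work.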
I want to use a decorator to handle auditing of various functions (mainly Django view functions, but not exclusively). In order to do this I would like to be able to audit the function post-execution - i.e. the function runs as normal, and if it returns without an exception, then the decorator logs the fact.
Something like:
@audit_action(action='did something')
def do_something(*args, **kwargs):
    if args[0] == 'foo':
        return 'bar'
    else:
        return 'baz'
Where audit_action would only run after the function has completed.
Decorators usually return a wrapper function; just put your logic in the wrapper function after invoking the wrapped function.
def audit_action(action):
    def decorator_func(func):
        def wrapper_func(*args, **kwargs):
            # Invoke the wrapped function first
            retval = func(*args, **kwargs)
            # Now do something here with retval and/or action
            print('In wrapper_func, handling action {!r} after wrapped function returned {!r}'.format(action, retval))
            return retval
        return wrapper_func
    return decorator_func
So audit_action(action='did something') is a decorator factory that returns a scoped decorator_func, which is used to decorate your do_something (do_something = decorator_func(do_something)).
After decorating, your do_something reference has been replaced by wrapper_func. Calling wrapper_func() causes the original do_something() to be called, and then your code in the wrapper func can do things.
The above code, combined with your example function, gives the following output:
>>> do_something('foo')
In wrapper_func, handling action 'did something' after wrapped function returned 'bar'
'bar'
Your decorator can handle it itself, like this:
def audit_action(function_to_decorate):
    def wrapper(*args, **kw):
        # Calling your function
        output = function_to_decorate(*args, **kw)
        # Below this line you can do post processing
        print("In Post Processing....")
        return output
    return wrapper
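A hedged usage sketch for this variant (note that, unlike the factory version above, it takes the function directly, so there is no action argument):
@audit_action
def do_something(x):
    return x * 2

do_something(3)
# prints: In Post Processing....
# returns 6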