Composing functionality into a method - python

I have something written in Perl/Moose that uses method modifiers (before/after/around) and I'm wondering what the "Pythonic" way of composing functionality into a method would be?
To expand, I have a data structure that needs to be processed; data is "tagged" to mark how it should be processed. I check those tags and then dynamically build my processor accordingly. Using Perl's Moose I can pull in the "traits" I need and then run a single method to process the data. Each "Role" I pull in assumes there's a "process" method and simply bolts its functionality on with an after 'process' => sub { }. This way, if I have a new type of data to deal with, I can add another Role without having to touch any other code. All this specifically avoids any inheritance and is very simple to manage.
I thought about trying to recreate Moose's "method modifiers" using decorators and/or metaclasses, but I'm wondering what the idiomatic way would be in Python...
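For illustration, the decorator route I had in mind would be something like this sketch (all names are hypothetical, not code I actually have):

import functools

def after(extra):
    """Hypothetical Moose-style 'after' modifier: returns a decorator
    that runs `extra` after the wrapped method returns."""
    def decorate(method):
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            result = method(self, *args, **kwargs)
            extra(self, *args, **kwargs)  # bolted-on behaviour
            return result
        return wrapper
    return decorate

class Processor:
    def process(self, data):
        print("core processing:", data)

def log_step(self, data):
    print("after processing:", data)

# bolt extra behaviour onto process() without touching Processor itself
Processor.process = after(log_step)(Processor.process)

Processor().process("tagged-data")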
Thanks

I hardly know any Perl, but from looking at the first example in the MethodModifier docs, it seems to me context managers in Python would be the thing that most resembles Moose's method modifiers:
class MyContextManager(object):
    def __init__(self, data):
        self.data = data

    def __call__(self):
        print('foo')

    def __enter__(self):
        print("entering... (data=%s)" % self.data)
        # this will be bound to the name after the 'as' clause
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print("exiting... (data=%s)" % self.data)

with MyContextManager('bar') as manager:
    print("before")
    manager()
    print("after")
Output:
entering... (data=bar)
before
foo
after
exiting... (data=bar)
Unfortunately, documentation for context managers is a bit spread out:
contextlib docs
Context Manager Types
With Statement Context Managers
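For what it's worth, much the same enter/exit behaviour can be had more compactly with contextlib.contextmanager (a sketch; it doesn't reproduce the callable manager object, just the bracketing):

from contextlib import contextmanager

@contextmanager
def my_context_manager(data):
    print("entering... (data=%s)" % data)
    try:
        yield data  # bound to the name after the 'as' clause
    finally:
        print("exiting... (data=%s)" % data)

with my_context_manager('bar') as data:
    print("working with %s" % data)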

Just to expand on @LukasGraf's answer, pulling in contexts dynamically (in Python 3):
from contextlib import ExitStack

class Manager1:
    def __init__(self, data):
        self.data = data

    def __enter__(self):
        print("entering 1... (data=%s)" % self.data)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print("exiting 1... (data=%s)" % self.data)

class Manager2:
    def __init__(self, data):
        self.data = data

    def __enter__(self):
        print("entering 2... (data=%s)" % self.data)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        print("exiting 2... (data=%s)" % self.data)

contexts = [Manager2('test'), Manager1('test')]

with ExitStack() as stack:
    for context in contexts:
        stack.enter_context(context)
    print('here')
generates...
entering 2... (data=test)
entering 1... (data=test)
here
exiting 1... (data=test)
exiting 2... (data=test)
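Tying this back to the original tag-driven design: the list of managers could itself be built from the data's tags at runtime, so adding a new kind of data means adding one manager class and one registry entry (a sketch; the registry and tag names are hypothetical):

# hypothetical registry mapping tags to the manager classes above
MANAGERS = {'one': Manager1, 'two': Manager2}

def process(data, tags):
    with ExitStack() as stack:
        for tag in tags:
            stack.enter_context(MANAGERS[tag](data))
        print('processing %s' % data)

process('test', ['two', 'one'])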

Related

Trivial context manager in Python

My resource can be of type R1, which requires locking, or of type R2, which does not:

class MyClass(object):  # broken
    def __init__(self, ...):
        if ...:
            self.resource = R1(...)
            self.lock = threading.Lock()
        else:
            self.resource = R2(...)
            self.lock = None

    def foo(self):  # there are many locking methods
        with self.lock:
            operate(self.resource)

The above obviously fails if self.lock is None.
My options are:

1. if:

   def foo(self):
       if self.lock:
           with self.lock:
               operate(self.resource)
       else:
           operate(self.resource)

   cons: too verbose
   pro: does not create an unnecessary threading.Lock

2. always set self.lock to a threading.Lock

   pro: code is simplified
   cons: with self.lock appears to be relatively expensive (comparable to disk I/O!)

3. define a trivial lock class:

   class TrivialLock(object):
       def __enter__(self): pass
       def __exit__(self, _a, _b, _c): pass
       def acquire(self): pass
       def release(self): pass

   and use it instead of None for R2.

   pro: simple code
   cons: I have to define TrivialLock
Questions:

1. Which method is preferred by the community?
2. Regardless of (1), does anyone actually define something like TrivialLock? (I actually expected something like that to be in the standard library...)
3. Does my observation that the locking cost is comparable to that of a write conform to expectations?
I would define TrivialLock. It can be even more trivial, though, since you just need a context manager, not a lock.
class TrivialLock(object):
    def __enter__(self):
        pass

    def __exit__(self, *args):
        pass
You can make this even more trivial using contextlib:

import contextlib

@contextlib.contextmanager
def TrivialLock():
    yield

self.lock = TrivialLock()
And since yield can be an expression, you can define TrivialLock inline instead:

self.lock = contextlib.contextmanager(lambda: (yield))()

Note the parentheses; lambda: yield is invalid. However, this makes a single-use context manager; if you try to use the same value in a second with statement, you get a RuntimeError because the generator is exhausted.
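As an aside, since Python 3.7 the standard library ships exactly this as contextlib.nullcontext, which (unlike the single-use lambda trick above) can be re-entered any number of times:

import threading
from contextlib import nullcontext

# `needs_locking` is a hypothetical flag standing in for the R1/R2 check
self.lock = threading.Lock() if needs_locking else nullcontext()

with self.lock:  # works the same whether or not a real lock is present
    operate(self.resource)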

Python How to force object instantiation via Context Manager?

I want to force object instantiation via a class context manager, so as to make it impossible to instantiate the object directly.
I implemented this solution, but technically a user can still instantiate the object.

class HessioFile:
    """
    Represents a pyhessio file instance
    """
    def __init__(self, filename=None, from_context_manager=False):
        if not from_context_manager:
            raise HessioError('HessioFile can only be used with a context manager')

And the context manager:

@contextmanager
def open(filename):
    """
    ...
    """
    hessfile = HessioFile(filename, from_context_manager=True)

Any better solution?
If you can rely on your clients following basic Python coding principles, then you can guarantee that no method of your class will be called outside the context.
Your client is not supposed to call __enter__ explicitly; therefore, if __enter__ has been called, you know your client used a with statement and is inside the context (__exit__ will be called).
You just need a boolean variable that remembers whether you are inside or outside the context.
class Obj:
    def __init__(self):
        self._inside_context = False

    def __enter__(self):
        self._inside_context = True
        print("Entering context.")
        return self

    def __exit__(self, *exc):
        print("Exiting context.")
        self._inside_context = False

    def some_stuff(self, name):
        if not self._inside_context:
            raise Exception("This method should be called from inside context.")
        print("Doing some stuff with", name)

    def some_other_stuff(self, name):
        if not self._inside_context:
            raise Exception("This method should be called from inside context.")
        print("Doing some other stuff with", name)

with Obj() as inst_a:
    inst_a.some_stuff("A")
    inst_a.some_other_stuff("A")

inst_b = Obj()
with inst_b:
    inst_b.some_stuff("B")
    inst_b.some_other_stuff("B")

inst_c = Obj()
try:
    inst_c.some_stuff("c")
except Exception:
    print("Instance C couldn't do stuff.")

try:
    inst_c.some_other_stuff("c")
except Exception:
    print("Instance C couldn't do some other stuff.")
This will print:
Entering context.
Doing some stuff with A
Doing some other stuff with A
Exiting context.
Entering context.
Doing some stuff with B
Doing some other stuff with B
Exiting context.
Instance C couldn't do stuff.
Instance C couldn't do some other stuff.
Since you'll probably have many methods that you want to "protect" from being called outside the context, you can write a decorator to avoid repeating the same boolean test:

from functools import wraps

def raise_if_outside_context(method):
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        if not self._inside_context:
            raise Exception("This method should be called from inside context.")
        return method(self, *args, **kwargs)
    return wrapper

Then change your methods to:

@raise_if_outside_context
def some_other_stuff(self, name):
    print("Doing some other stuff with", name)
I suggest the following approach:
class MainClass:
    def __init__(self, *args, **kwargs):
        self._class = _MainClass(*args, **kwargs)

    def __enter__(self):
        print('entering...')
        return self._class

    def __exit__(self, exc_type, exc_val, exc_tb):
        # Teardown code
        print('running exit code...')

# This class should not be instantiated directly!!
class _MainClass:
    def __init__(self, attribute1, attribute2):
        self.attribute1 = attribute1
        self.attribute2 = attribute2
        ...

    def method(self):
        # execute code
        if self.attribute1 == "error":
            raise Exception
        print(self.attribute1)
        print(self.attribute2)

with MainClass('attribute1', 'attribute2') as main_class:
    main_class.method()

print('---')

with MainClass('error', 'attribute2') as main_class:
    main_class.method()
This will output:
entering...
attribute1
attribute2
running exit code...
---
entering...
running exit code...
Traceback (most recent call last):
File "scratch_6.py", line 34, in <module>
main_class.method()
File "scratch_6.py", line 25, in method
raise Exception
Exception
None that I am aware of. Generally, if it exists in Python, you can find a way to call it. A context manager is, in essence, a resource management scheme; if there is no use case for your class outside of the manager, perhaps the context management could be integrated into the methods of the class? I would also suggest checking out the atexit module from the standard library. It allows you to register cleanup functions in much the same way that a context manager handles cleanup, but you can bundle it into your class so that each instantiation has a registered cleanup function. Might help.
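For instance, a minimal sketch of the atexit idea (the class and names are hypothetical):

import atexit

class ManagedResource:
    def __init__(self, name):
        self.name = name
        # register this instance's cleanup to run at interpreter exit
        atexit.register(self.close)

    def close(self):
        print("cleaning up", self.name)

r = ManagedResource("db-connection")
# ... use r; close() runs automatically when the interpreter exits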
It is worth noting that no amount of effort will prevent people from doing stupid things with your code. Your best bet is generally to make it as easy as possible for people to do smart things with your code.
You can think of hacky ways to try and enforce this (like inspecting the call stack to forbid direct calls to your object, or a boolean attribute set in __enter__ that you check before allowing other actions on the instance), but that will eventually become a mess to understand and explain to others.
Regardless, you should also be certain that people will always find ways to bypass it if they want to. Python doesn't really tie your hands down; if you want to do something silly, it lets you do it; responsible adults, right?
If you need enforcement, you'd be better off supplying it as a documentation notice. That way, if users opt to instantiate directly and trigger unwanted behavior, it's their fault for not following the guidelines for your code.

wrapping class method in try / except using decorator

I have a general purpose function that sends info about exceptions to an application log.
I use the exception_handler function from within methods in classes. The app log handler that is passed into and called by the exception_handler creates a JSON string that is what actually gets sent to the logfile. This all works fine.
import sys
import traceback

def exception_handler(log, terminate=False):
    exc_type, exc_value, exc_tb = sys.exc_info()
    filename, line_num, func_name, text = traceback.extract_tb(exc_tb)[-1]
    log.error('{0} Thrown from module: {1} in {2} at line: {3} ({4})'.format(
        exc_value, filename, func_name, line_num, text))
    del (filename, line_num, func_name, text)
    if terminate:
        sys.exit()
I use it as follows: (a hyper-simplified example)
from utils import exception_handler

class Demo1(object):
    def __init__(self):
        self.log = ...  # {a class that implements the application log}

    def demo(self, name):
        try:
            print(name)
        except Exception:
            exception_handler(self.log, True)
I would like to alter exception_handler for use as a decorator for a large number of methods, i.e.:
@handle_exceptions
def func1(self, name):
    {some code that gets wrapped in a try / except by the decorator}
I've looked at a number of articles about decorators, but I haven't yet figured out how to implement what I want to do. I need to pass a reference to the active log object and also pass 0 or more arguments to the wrapped function. I'd be happy to convert exception_handler to a method in a class if that makes things easier.
Such a decorator would simply be:
def handle_exceptions(f):
    def wrapper(*args, **kw):
        try:
            return f(*args, **kw)
        except Exception:
            self = args[0]
            exception_handler(self.log, True)
    return wrapper
This decorator simply calls the wrapped function inside a try suite.
This can be applied to methods only, as it assumes the first argument is self.
Thanks to Martijn for pointing me in the right direction.
I couldn't get his suggested solution to work, but after a little more searching based on his example, the following works fine:
def handle_exceptions(fn):
    from functools import wraps

    @wraps(fn)
    def wrapper(self, *args, **kw):
        try:
            return fn(self, *args, **kw)
        except Exception:
            exception_handler(self.log)
    return wrapper
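Applied to the earlier Demo1 example, usage then looks like this (a sketch):

class Demo1(object):
    def __init__(self):
        self.log = ...  # the application log object, as before

    @handle_exceptions
    def demo(self, name):
        print(name)  # any exception raised here is routed to exception_handler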

Encapsulating retries into `with` block

I'm looking to encapsulate logic for database transactions into a with block; wrapping the code in a transaction and handling various exceptions (locking issues). This is simple enough, however I'd like to also have the block encapsulate the retrying of the code block following certain exceptions. I can't see a way to package this up neatly into the context manager.
Is it possible to repeat the code within a with statement?
I'd like to use it as simply as this, which is really neat.
def do_work():
    ...
    # This is ideal!
    with transaction(retries=3):
        # Atomic DB statements
        ...
    ...
I'm currently handling this with a decorator, but I'd prefer to offer the context manager (or in fact both), so I can choose to wrap a few lines of code in the with block instead of an inline function wrapped in a decorator, which is what I do at the moment:
def do_work():
    ...
    # This is not ideal!
    @transaction(retries=3)
    def _perform_in_transaction():
        # Atomic DB statements
        ...
    _perform_in_transaction()
    ...
Is it possible to repeat the code within a with statement?
No.
As pointed out earlier in that mailing list thread, you can reduce a bit of duplication by making the decorator call the passed function:
def do_work():
    ...
    # This is not ideal!
    @transaction(retries=3)
    def _perform_in_transaction():
        # Atomic DB statements
        ...
    # called implicitly
    ...
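A sketch of that trick: the decorator runs the function at definition time and binds the name to its result rather than to a function, so the explicit call disappears. The broad except here is an assumption; in practice you'd narrow it to the DB's locking errors:

def transaction(retries=3):
    def run_now(func):
        last_exc = None
        for _ in range(retries):
            try:
                return func()  # the body runs here, at definition time
            except Exception as exc:
                last_exc = exc
        raise last_exc
    return run_now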
The way that occurs to me to do this is just to implement a standard database transaction context manager, but allow it to take a retries argument in the constructor. Then I'd just wrap that up in your method implementations. Something like this:
class transaction(object):
    def __init__(self, retries=0):
        self.retries = retries

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, traceback):
        pass

    # Implementation...
    def execute(self, query):
        err = None
        for _ in range(self.retries + 1):  # the initial attempt plus retries
            try:
                return self._cursor.execute(query)
            except Exception as e:
                err = e  # probably ought to save all errors, but hey
        raise err

with transaction(retries=3) as cursor:
    cursor.execute('BLAH')
As decorators are just functions themselves, you could do the following:

with transaction(_perform_in_transaction, retries=3) as _perf:
    _perf()

For the details, you'd need to implement transaction() as a factory that returns an object whose __call__() invokes the original method and repeats it up to retries number of times on failure; __enter__() and __exit__() would be defined as normal for database transaction context managers.
You could alternatively set up transaction() such that it itself executes the passed method up to retries number of times, which would require about the same amount of work as implementing the context manager, but would mean actual usage is reduced to just transaction(_perform_in_transaction, retries=3) (which is, in fact, equivalent to the decorator example delnan provided).
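A rough sketch of that factory (the transaction plumbing in __enter__/__exit__ is elided, and the broad except is an assumption):

class transaction(object):
    def __init__(self, func, retries=0):
        self.func = func
        self.retries = retries

    def __enter__(self):
        return self  # the callable wrapper itself

    def __exit__(self, exc_type, exc_val, traceback):
        pass  # commit/rollback would go here

    def __call__(self, *args, **kwargs):
        last_exc = None
        for _ in range(self.retries + 1):  # first attempt plus retries
            try:
                return self.func(*args, **kwargs)
            except Exception as exc:
                last_exc = exc
        raise last_exc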
While I agree it can't be done with a context manager... it can be done with two context managers!
The result is a little awkward, and I am not sure whether I approve of my own code yet, but this is what it looks like as the client:
with RetryManager(retries=3) as rm:
    while rm:
        with rm.protect:
            print("Attempt #%d of %d" % (rm.attempt_count, rm.max_retries))
            # Atomic DB statements
There is an explicit while loop still, and not one, but two, with statements, which leaves a little too much opportunity for mistakes for my liking.
Here's the code:
class RetryManager(object):
    """ Context manager that counts attempts to run statements without
    exceptions being raised.
    - returns True when there should be more attempts
    """
    class _RetryProtector(object):
        """ Context manager that only raises exceptions if its parent
        RetryManager has given up."""
        def __init__(self, retry_manager):
            self._retry_manager = retry_manager

        def __enter__(self):
            self._retry_manager._note_try()
            return self

        def __exit__(self, exc_type, exc_val, traceback):
            if exc_type is None:
                self._retry_manager._note_success()
            else:
                # This would be a good place to implement sleep between
                # retries.
                pass
            # Suppress the exception if the retry manager is still alive.
            return self._retry_manager.is_still_trying()

    def __init__(self, retries=1):
        self.max_retries = retries
        self.attempt_count = 0  # Note: 1-based.
        self._success = False
        self.protect = RetryManager._RetryProtector(self)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, traceback):
        pass

    def _note_try(self):
        self.attempt_count += 1

    def _note_success(self):
        self._success = True

    def is_still_trying(self):
        return not self._success and self.attempt_count < self.max_retries

    def __bool__(self):
        return self.is_still_trying()
Bonus: I know you don't want to separate your work off into separate functions wrapped with decorators, but if you were happy with that, the redo package from Mozilla offers decorators to do that, so you don't have to roll your own. There is even a context manager that effectively acts as a temporary decorator for your function, but it still relies on your retriable code being factored out into a single function.
This question is a few years old but after reading the answers I decided to give this a shot.
This solution requires the use of a "helper" class, but I think it does provide an interface with retries configured through a context manager.
class Client:
    def _request(self):
        # do request stuff
        print("tried")
        raise Exception()

    def request(self):
        retry = getattr(self, "_retry", None)
        if not retry:
            return self._request()
        for n in range(retry.tries):
            try:
                return self._request()
            except Exception:
                retry.attempts += 1
        # note: if every attempt fails, the exception is swallowed here

class Retry:
    def __init__(self, client, tries=1):
        self.client = client
        self.tries = tries
        self.attempts = 0

    def __enter__(self):
        self.client._retry = self

    def __exit__(self, *exc):
        print(f"Tried {self.attempts} times")
        del self.client._retry
>>> client = Client()
>>> with Retry(client, tries=3):
...     # will try 3 times
...     response = client.request()
tried
tried
tried
Tried 3 times

Logging lock acquire and release calls in multi-threaded application

I am trying to debug a multi-threaded Python application that uses various locks.
Rather than place log.debug(...) statements all over the shop to track where and when the locks are acquired and released, my idea is to decorate the methods threading.Lock.acquire() and threading.Lock.release(), and prefix their invocation with something like the following:

log.debug("lock::acquire() [%s.%s.%s]" %
          (currentThread().getName(),
           self.__class__.__name__,
           sys._getframe().f_code.co_name))
Where log is some global logging object - for the sake of discussion.
Now, ideally, the name "lock" in the log entry should be derived at runtime, so that irrespective of which lock object these methods are invoked upon, the log will record its name, the operation decorated, and the current thread, class, and function from which the operation (acquire | release) is called.
Disclaimer: I acknowledge that the code given above would not be sufficient for any such decorator implementation. It is only provided just to give a flavour of what I think could be achieved.
Does anyone know if I can decorate standard library methods, without doctoring the original source code of the threading library, i.e., from within my calling application code?
Perhaps I am barking up the wrong tree and there is a better way of achieving the same ends, without using decorators? Many thanks in advance for any guidance if this is indeed the case.
Solution: (inspired by lazyr)
The following code logs the lock operations and gives me the name of the method/function calling the lock operation (I am also adapting the code to work with Conditions and their additional wait() and notify() methods):
import inspect
import threading

# Class to wrap Lock and simplify logging of lock usage
class LogLock(object):
    """
    Wraps a standard Lock, so that attempts to use the
    lock according to its API are logged for debugging purposes
    """
    def __init__(self, name, log):
        self.name = str(name)
        self.log = log
        self.lock = threading.Lock()
        self.log.debug("{0} created {1}".format(
            inspect.stack()[1][3], self.name))

    def acquire(self, blocking=True):
        self.log.debug("{0} trying to acquire {1}".format(
            inspect.stack()[1][3], self.name))
        ret = self.lock.acquire(blocking)
        if ret:
            self.log.debug("{0} acquired {1}".format(
                inspect.stack()[1][3], self.name))
        else:
            self.log.debug("{0} non-blocking acquire of {1} lock failed".format(
                inspect.stack()[1][3], self.name))
        return ret

    def release(self):
        self.log.debug("{0} releasing {1}".format(inspect.stack()[1][3], self.name))
        self.lock.release()

    def __enter__(self):
        self.acquire()

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.release()
        return False  # Do not swallow exceptions
Where the log instance passed to LogLock.__init__ was defined with a logging.Formatter as follows, to give me the calling thread's identity:

# With the following format
log_format = \
    logging.Formatter('%(asctime)s %(levelname)s %(threadName)s %(message)s')
I recently had just your problem. I set up my logger to automatically log the thread name, like in this answer. I found out it was not possible to subclass Lock, so I had to wrap it, like this:

from threading import Lock

class LogLock(object):
    def __init__(self, name):
        self.name = str(name)
        self.lock = Lock()

    def acquire(self, blocking=True):
        log.debug("{0:x} Trying to acquire {1} lock".format(
            id(self), self.name))
        ret = self.lock.acquire(blocking)
        if ret:
            log.debug("{0:x} Acquired {1} lock".format(
                id(self), self.name))
        else:
            log.debug("{0:x} Non-blocking acquire of {1} lock failed".format(
                id(self), self.name))
        return ret

    def release(self):
        log.debug("{0:x} Releasing {1} lock".format(id(self), self.name))
        self.lock.release()

    def __enter__(self):
        self.acquire()

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.release()
        return False  # Do not swallow exceptions
I logged the id of the object so I could distinguish between multiple locks with the same name; you might not need that.
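Usage is then a drop-in swap wherever a threading.Lock would be created, assuming a module-level log configured with the thread-aware format shown earlier (a sketch):

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(threadName)s %(message)s')
log = logging.getLogger(__name__)

cache_lock = LogLock("cache")  # instead of threading.Lock()
with cache_lock:               # acquire/release show up in the log
    pass  # ... critical section ...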
