Trivial context manager in Python

My resource can be of type R1, which requires locking, or of type R2, which does not require it:
class MyClass(object):  # broken
    def __init__(self, ...):
        if ...:
            self.resource = R1(...)
            self.lock = threading.Lock()
        else:
            self.resource = R2(...)
            self.lock = None

    def foo(self):  # there are many locking methods
        with self.lock:
            operate(self.resource)
The above obviously fails if self.lock is None.
My options are:
if:
    def foo(self):
        if self.lock:
            with self.lock:
                operate(self.resource)
        else:
            operate(self.resource)
cons: too verbose
pro: does not create an unnecessary threading.Lock
always set self.lock to a threading.Lock:
pro: code is simplified
cons: with self.lock appears to be relatively expensive, comparable to disk I/O! (see the timing sketch after this list)
define a trivial lock class:
class TrivialLock(object):
    def __enter__(self): pass
    def __exit__(self, _a, _b, _c): pass
    def acquire(self): pass
    def release(self): pass
and use it instead of None for R2.
pro: simple code
cons: I have to define TrivialLock
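For reference, here is a quick way to sanity-check the locking overhead yourself (a minimal timeit sketch; the absolute numbers are platform-dependent and purely illustrative):

    import threading
    import timeit

    lock = threading.Lock()

    def with_lock():
        with lock:
            pass

    def without_lock():
        pass

    # Per-call cost of an uncontended acquire/release, in microseconds.
    n = 1000000
    print("with lock: %.3f us" % (timeit.timeit(with_lock, number=n) / n * 1e6))
    print("no lock:   %.3f us" % (timeit.timeit(without_lock, number=n) / n * 1e6))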
Questions
What method is preferred by the community?
Regardless of (1), does anyone actually define something like
TrivialLock? (I actually expected that something like that would be
in the standard library...)
Does my observation that the cost of locking is comparable to that of a write conform to expectations?

I would define TrivialLock. It can be even more trivial, though, since you just need a context manager, not a lock.
class TrivialLock(object):
    def __enter__(self):
        pass
    def __exit__(self, *args):
        pass
You can make this even more trivial using contextlib:
    import contextlib

    @contextlib.contextmanager
    def TrivialLock():
        yield

    self.lock = TrivialLock()
And since yield can be an expression, you can define TrivialLock inline instead:

    self.lock = contextlib.contextmanager(lambda: (yield))()

Note the parentheses; lambda: yield is invalid syntax. However, because the underlying generator can run only once, this makes a single-use context manager: if you try to use the same value in a second with statement, you get a RuntimeError because the generator is exhausted.
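A quick demonstration of that single-use behavior (a minimal sketch; the exact RuntimeError message can vary across Python versions):

    import contextlib

    cm = contextlib.contextmanager(lambda: (yield))()

    with cm:
        pass  # works the first time

    try:
        with cm:
            pass  # the generator is already exhausted
    except RuntimeError as e:
        print("reuse failed:", e)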

Related

Multiple ways to invoke context manager in python

Background
I have a class in python that takes in a list of mutexes. It then sorts that list, and uses __enter__() and __exit__() to lock/unlock all of the mutexes in a specific order to prevent deadlocks.
The class currently saves us a lot of hassle with potential deadlocks, as we can just invoke it in an RAII style, i.e.:
    self.lock = SuperLock(list_of_locks)

    # Lock all mutexes.
    with self.lock:
        # Issue calls to all hardware protected by these locks.
Problem
We'd like to expose ways for this class to provide an RAII-style API so we can lock only half of the mutexes at once, when called in a certain way, i.e.:
    self.lock = SuperLock(list_of_locks)

    # Lock all mutexes.
    with self.lock:
        # Issue calls to all hardware protected by these locks.

    # Lock the first half of the mutexes in SuperLock.list_of_locks
    with self.lock.first_half_only:
        # Issue calls to all hardware protected by these locks.

    # Lock the second half of the mutexes in SuperLock.list_of_locks
    with self.lock.second_half_only:
        # Issue calls to all hardware protected by these locks.
Question
Is there a way to provide this type of functionality so I could invoke with self.lock.first_half_only or with self.lock.first_half_only() to provide a simple API to users? We'd like to keep all this functionality in a single class.
Thank you.
Yes, you can get this interface. The object that is entered/exited in the context of a with statement is the resolved attribute. So you can go ahead and define context managers as attributes of your context manager:
    from contextlib import ExitStack  # on Python 2: pip install contextlib2
    from contextlib import contextmanager

    @contextmanager
    def lock(name):
        print("entering lock {}".format(name))
        yield
        print("exiting lock {}".format(name))

    @contextmanager
    def many(contexts):
        with ExitStack() as stack:
            for cm in contexts:
                stack.enter_context(cm)
            yield

    class SuperLock(object):
        def __init__(self, list_of_locks):
            self.list_of_locks = list_of_locks

        def __enter__(self):
            # implement for entering the `with self.lock:` use case
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            pass

        @property
        def first_half_only(self):
            return many(self.list_of_locks[:4])

        @property
        def second_half_only(self):
            # yo dawg, we herd you like with-statements
            return many(self.list_of_locks[4:])
When you create and return a new context manager, you may use state from the instance (i.e. self).
Example usage:
>>> list_of_locks = [lock(i) for i in range(8)]
>>> super_lock = SuperLock(list_of_locks)
>>> with super_lock.first_half_only:
... print('indented')
...
entering lock 0
entering lock 1
entering lock 2
entering lock 3
indented
exiting lock 3
exiting lock 2
exiting lock 1
exiting lock 0
Edit: class based equivalent of the lock generator context manager shown above
    class lock(object):
        def __init__(self, name):
            self.name = name

        def __enter__(self):
            print("entering lock {}".format(self.name))
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            print("exiting lock {}".format(self.name))
            # If you want to handle the exception (if any), you may use the
            # return value of this method to suppress re-raising the error on exit
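For instance, a minimal sketch of using that return value (the class name here is illustrative): returning a truthy value from __exit__ suppresses the exception.

    class suppress_keyerror(object):
        def __enter__(self):
            return self
        def __exit__(self, exc_type, exc_value, traceback):
            # True here means: swallow the exception, don't re-raise.
            return exc_type is KeyError

    with suppress_keyerror():
        {}["missing"]  # raises KeyError inside the block...
    print("still running")  # ...but it is suppressed, so we get here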
    from contextlib import contextmanager

    class A:
        @contextmanager
        def i_am_lock(self):
            print("entering")
            yield
            print("leaving")

    a = A()
    with a.i_am_lock():
        print("inside")
Output:
entering
inside
leaving
Further, you can use contextlib.ExitStack to manage your locks better.
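For example, a minimal sketch of acquiring several such context managers with ExitStack (the helper name here is illustrative):

    from contextlib import ExitStack, contextmanager

    @contextmanager
    def named_lock(name):
        print("acquire", name)
        yield
        print("release", name)

    with ExitStack() as stack:
        for name in ("a", "b", "c"):
            stack.enter_context(named_lock(name))
        print("all held")
    # released in reverse order: c, b, a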
I'd use a SimpleNamespace to allow attribute access to different SuperLock objects, e.g.:
    from types import SimpleNamespace

    self.lock = SimpleNamespace(
        all=SuperLock(list_of_locks),
        first_two_locks=SuperLock(list_of_locks[:2]),
        other_locks=SuperLock(list_of_locks[2:]),
    )

    with self.lock.all:
        # Issue calls to all hardware protected by these locks.

    with self.lock.first_two_locks:
        # Issue calls to all hardware protected by these locks.

    with self.lock.other_locks:
        # Issue calls to all hardware protected by these locks.
Edit:
For Python 2, you can use this class to achieve similar behavior:

    class SimpleNamespace:
        def __init__(self, **kwargs):
            self.__dict__.update(kwargs)
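With that shim in place, the snippet above works unchanged; attribute access behaves the same way:

    ns = SimpleNamespace(a=1, b=2)
    assert ns.a == 1 and ns.b == 2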

Pythonic way to encapsulate method arguments of a class

Objects of my class A are similar to network connections, i.e. characterized by a handle per connection opened. That is, one calls different methods with a handle (a particular connection) as argument. My class A (python 2.7) looks like:
    class A(object):
        def __init__(self, *args):
            ... some init
        def my_open(self, *args):
            handle = ... some open
            return handle
        def do_this(self, handle, *args):
            foo_this(handle, args)
        def do_that(self, handle, *args):
            foo_that(handle, args)
A typical usage is
a = A(args)
handle = a.my_open(args2)
a.do_this(handle, args3)
Now, in a particular situation, there is only one connection to take care of, i.e. one handle in play. So, it is reasonable to hide this handle but keep class A for the more general situation. Thus, my first thoughts on a class B
which "is a" kind of class A (usage stays the same but hides handle) are:
    class B(A):
        def __init__(self, *args):
            super(B, self).__init__(*args)
            self.handle = None
        def my_open(self, *args):
            self.handle = super(B, self).my_open(*args)
        def do_this(self, *args):
            super(B, self).do_this(self.handle, *args)
        def do_that(self, *args):
            super(B, self).do_that(self.handle, *args)
Unfortunately, in my opinion, it seems very convoluted. Any better ideas?
Objects of my class A are similar to network connections, i.e. characterized by a handle per connection opened. That is, one calls different methods with a handle (a particular connection) as argument.
You have inverted the responsibility. The handle object holds the state the methods operate on, so those methods should live on the handle, not the factory.
Move your methods to the handle object, so the API becomes:
a = A(args)
handle = a.my_open(args2)
handle.do_this(args3)
The class implementing the handle() could retain a reference to a if so required; that's an implementation detail that the users of the API don't need to worry about.
You then return new handles, or a singleton handle, as needed.
By moving responsibility to the handle object, you can also make your factory produce handles of entirely different types, depending on the arguments. A(args).my_open(args2) could also produce the singleton handle that you now have class B for, for example.
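A minimal sketch of that idea (all names here are illustrative, not from the question):

    class SingleHandle(object):
        def do_this(self, *args):
            print("single:", args)

    class PooledHandle(object):
        def do_this(self, *args):
            print("pooled:", args)

    class A(object):
        def __init__(self, pooled=False):
            self._pooled = pooled
        def my_open(self, *args):
            # The factory decides which handle type to hand out.
            return PooledHandle() if self._pooled else SingleHandle()

    handle = A(pooled=True).my_open()
    handle.do_this(1, 2)  # pooled: (1, 2)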
How about a class for the handle itself?:
    class Handle(object):
        def __init__(self, *args):
            # init ...
            self._handle = low_level_handle
        def do_this(self, *args):
            # do_this ...
            pass
        def do_that(self, *args):
            # do_that ...
            pass

    class A(object):
        def __init__(self, *args):
            # init ...
            pass
        def my_open(self, *args):
            handle = Handle(*args)
            # handle post-processing (if any)
            return handle
e.g.:
a = A(args)
handle = a.my_open(args2)
handle.do_this(args3)

Python How to force object instantiation via Context Manager?

I want to force object instantiation via a class context manager, making it impossible to instantiate the class directly.
I implemented this solution, but technically a user can still instantiate the object directly:
    class HessioFile:
        """
        Represents a pyhessio file instance
        """
        def __init__(self, filename=None, from_context_manager=False):
            if not from_context_manager:
                raise HessioError('HessioFile can only be used with a context manager')
And context manager:
    @contextmanager
    def open(filename):
        """
        ...
        """
        hessfile = HessioFile(filename, from_context_manager=True)
        yield hessfile
Any better solution?
If you assume that your clients will follow basic Python coding principles, then you can guarantee that no method of your class will be called from outside the context.
Your client is not supposed to call __enter__ explicitly; therefore, if __enter__ has been called, you know your client used a with statement and is therefore inside the context (__exit__ will be called).
You just need to have a boolean variable that helps you remember if you are inside or outside context.
    class Obj:
        def __init__(self):
            self._inside_context = False

        def __enter__(self):
            self._inside_context = True
            print("Entering context.")
            return self

        def __exit__(self, *exc):
            print("Exiting context.")
            self._inside_context = False

        def some_stuff(self, name):
            if not self._inside_context:
                raise Exception("This method should be called from inside context.")
            print("Doing some stuff with", name)

        def some_other_stuff(self, name):
            if not self._inside_context:
                raise Exception("This method should be called from inside context.")
            print("Doing some other stuff with", name)


    with Obj() as inst_a:
        inst_a.some_stuff("A")
        inst_a.some_other_stuff("A")

    inst_b = Obj()
    with inst_b:
        inst_b.some_stuff("B")
        inst_b.some_other_stuff("B")

    inst_c = Obj()
    try:
        inst_c.some_stuff("c")
    except Exception:
        print("Instance C couldn't do stuff.")

    try:
        inst_c.some_other_stuff("c")
    except Exception:
        print("Instance C couldn't do some other stuff.")
This will print:
Entering context.
Doing some stuff with A
Doing some other stuff with A
Exiting context.
Entering context.
Doing some stuff with B
Doing some other stuff with B
Exiting context.
Instance C couldn't do stuff.
Instance C couldn't do some other stuff.
Since you'll probably have many methods that you want to "protect" from being called outside the context, you can write a decorator to avoid repeating the same boolean test:
    def raise_if_outside_context(method):
        def decorator(self, *args, **kwargs):
            if not self._inside_context:
                raise Exception("This method should be called from inside context.")
            return method(self, *args, **kwargs)
        return decorator
Then change your methods to:
    @raise_if_outside_context
    def some_other_stuff(self, name):
        print("Doing some other stuff with", name)
I suggest the following approach:
    class MainClass:
        def __init__(self, *args, **kwargs):
            self._class = _MainClass(*args, **kwargs)

        def __enter__(self):
            print('entering...')
            return self._class

        def __exit__(self, exc_type, exc_val, exc_tb):
            # Teardown code
            print('running exit code...')
            pass


    # This class should not be instantiated directly!!
    class _MainClass:
        def __init__(self, attribute1, attribute2):
            self.attribute1 = attribute1
            self.attribute2 = attribute2
            ...

        def method(self):
            # execute code
            if self.attribute1 == "error":
                raise Exception
            print(self.attribute1)
            print(self.attribute2)
    with MainClass('attribute1', 'attribute2') as main_class:
        main_class.method()

    print('---')

    with MainClass('error', 'attribute2') as main_class:
        main_class.method()
This will output:
entering...
attribute1
attribute2
running exit code...
---
entering...
running exit code...
Traceback (most recent call last):
File "scratch_6.py", line 34, in <module>
main_class.method()
File "scratch_6.py", line 25, in method
raise Exception
Exception
None that I am aware of. Generally, if it exists in Python, you can find a way to call it. A context manager is, in essence, a resource management scheme... if there is no use case for your class outside of the manager, perhaps the context management could be integrated into the methods of the class? I would suggest checking out the atexit module from the standard library. It allows you to register cleanup functions much in the same way that a context manager handles cleanup, but you can bundle it into your class, such that each instantiation has a registered cleanup function. Might help.
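A minimal sketch of that atexit idea (the class and names here are illustrative, not from the question):

    import atexit

    class Resource:
        def __init__(self, name):
            self.name = name
            # Each instantiation registers its own cleanup function.
            atexit.register(self.close)

        def close(self):
            print("closing", self.name)

    r = Resource("demo")  # "closing demo" is printed at interpreter exit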
It is worth noting that no amount of effort will prevent people from doing stupid things with your code. Your best bet is generally to make it as easy as possible for people to do smart things with your code.
You can think of hacky ways to try and enforce this (like inspecting the call stack to forbid direct calls to your object, or a boolean attribute set upon __enter__ that you check before allowing other actions on the instance), but that will eventually become a mess to understand and explain to others.
Regardless, you should also be certain that people will always find ways to bypass it if they want to. Python doesn't really tie your hands down; if you want to do something silly, it lets you. Responsible adults, right?
If you need enforcement, you'd be better off supplying it as a documentation notice. That way, if users opt to instantiate directly and trigger unwanted behavior, it's their fault for not following the guidelines for your code.

Encapsulating retries into `with` block

I'm looking to encapsulate logic for database transactions into a with block, wrapping the code in a transaction and handling various exceptions (locking issues). This is simple enough; however, I'd also like to have the block encapsulate retrying of the code block following certain exceptions. I can't see a way to package this up neatly into the context manager.
Is it possible to repeat the code within a with statement?
I'd like to use it as simply as this, which is really neat.
    def do_work():
        ...
        # This is ideal!
        with transaction(retries=3):
            # Atomic DB statements
            ...
        ...
I'm currently handling this with a decorator, but I'd prefer to offer the context manager (or in fact both), so I can choose to wrap a few lines of code in the with block instead of an inline function wrapped in a decorator, which is what I do at the moment:
    def do_work():
        ...
        # This is not ideal!
        @transaction(retries=3)
        def _perform_in_transaction():
            # Atomic DB statements
            ...
        _perform_in_transaction()
        ...
Is it possible to repeat the code within a with statement?
No.
As others have pointed out, you can reduce a bit of duplication by making the decorator call the passed function:
    def do_work():
        ...
        # This is not ideal!
        @transaction(retries=3)
        def _perform_in_transaction():
            # Atomic DB statements
            ...
        # called implicitly
        ...
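A sketch of what such a self-invoking decorator might look like (a hypothetical implementation, not from the thread; real code should catch narrower exception types):

    import functools

    def transaction(retries=3):
        def wrap(func):
            @functools.wraps(func)
            def run(*args, **kwargs):
                last_err = None
                for _ in range(retries):
                    try:
                        return func(*args, **kwargs)
                    except Exception as e:
                        last_err = e
                raise last_err
            run()  # call the wrapped block immediately at decoration time
            return run
        return wrap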
The way that occurs to me to do this is just to implement a standard database transaction context manager, but allow it to take a retries argument in the constructor. Then I'd just wrap that up in your method implementations. Something like this:
    class transaction(object):
        def __init__(self, retries=0):
            self.retries = retries

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_val, traceback):
            pass

        # Implementation...
        def execute(self, query):
            err = None
            for _ in range(self.retries):
                try:
                    return self._cursor.execute(query)
                except Exception as e:
                    err = e  # probably ought to save all errors, but hey
            raise err


    with transaction(retries=3) as cursor:
        cursor.execute('BLAH')
As decorators are just functions themselves, you could do the following:

    with transaction(_perform_in_transaction, retries=3) as _perf:
        _perf()

For the details, you'd need to implement transaction() as a factory that returns an object whose __call__() invokes the original method and repeats it up to retries times on failure; __enter__() and __exit__() would be defined as normal for database transaction context managers.
You could alternatively set up transaction() such that it itself executes the passed method up to retries times, which would probably require about the same amount of work as implementing the context manager but would mean actual usage would be reduced to just transaction(_perform_in_transaction, retries=3) (which is, in fact, equivalent to the decorator example delnan provided).
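A minimal sketch of that callable context manager (a hypothetical implementation; the names follow the usage above):

    class transaction(object):
        def __init__(self, func, retries=3):
            self.func = func
            self.retries = retries

        def __call__(self, *args, **kwargs):
            last_err = None
            for _ in range(self.retries):
                try:
                    return self.func(*args, **kwargs)
                except Exception as e:  # narrow this in real code
                    last_err = e
            raise last_err

        def __enter__(self):
            return self  # the object bound by `as` is the callable itself

        def __exit__(self, exc_type, exc_val, traceback):
            return False  # don't suppress exceptions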
While I agree it can't be done with a context manager... it can be done with two context managers!
The result is a little awkward, and I am not sure whether I approve of my own code yet, but this is what it looks like from the client's side:
    with RetryManager(retries=3) as rm:
        while rm:
            with rm.protect:
                print("Attempt #%d of %d" % (rm.attempt_count, rm.max_retries))
                # Atomic DB statements
There is an explicit while loop still, and not one, but two, with statements, which leaves a little too much opportunity for mistakes for my liking.
Here's the code:
    class RetryManager(object):
        """ Context manager that counts attempts to run statements without
            exceptions being raised.
            - returns True when there should be more attempts
        """

        class _RetryProtector(object):
            """ Context manager that only raises exceptions if its parent
                RetryManager has given up."""
            def __init__(self, retry_manager):
                self._retry_manager = retry_manager

            def __enter__(self):
                self._retry_manager._note_try()
                return self

            def __exit__(self, exc_type, exc_val, traceback):
                if exc_type is None:
                    self._retry_manager._note_success()
                else:
                    # This would be a good place to implement sleep between
                    # retries.
                    pass
                # Suppress exception if the retry manager is still alive.
                return self._retry_manager.is_still_trying()

        def __init__(self, retries=1):
            self.max_retries = retries
            self.attempt_count = 0  # Note: 1-based.
            self._success = False
            self.protect = RetryManager._RetryProtector(self)

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc_val, traceback):
            pass

        def _note_try(self):
            self.attempt_count += 1

        def _note_success(self):
            self._success = True

        def is_still_trying(self):
            return not self._success and self.attempt_count < self.max_retries

        def __bool__(self):
            return self.is_still_trying()
Bonus: I know you don't want to separate your work off into separate functions wrapped with decorators... but if you were happy with that, the redo package from Mozilla offers decorators to do that, so you don't have to roll your own. There is even a context manager that effectively acts as a temporary decorator for your function, but it still relies on your retryable code being factored out into a single function.
This question is a few years old but after reading the answers I decided to give this a shot.
This solution requires the use of a "helper" class, but I think it does provide an interface with retries configured through a context manager.
    class Client:
        def _request(self):
            # do request stuff
            print("tried")
            raise Exception()

        def request(self):
            retry = getattr(self, "_retry", None)
            if not retry:
                return self._request()
            else:
                for n in range(retry.tries):
                    try:
                        return self._request()
                    except Exception:
                        retry.attempts += 1


    class Retry:
        def __init__(self, client, tries=1):
            self.client = client
            self.tries = tries
            self.attempts = 0

        def __enter__(self):
            self.client._retry = self

        def __exit__(self, *exc):
            print(f"Tried {self.attempts} times")
            del self.client._retry
>>> client = Client()
>>> with Retry(client, tries=3):
... # will try 3 times
... response = client.request()
tried
tried
tried
Tried 3 times

python: closures and classes

I need to register an atexit function for use with a class (see Foo below for an example) that, unfortunately, I have no direct way of cleaning up via a method call: other code, that I don't have control over, calls Foo.start() and Foo.end() but sometimes doesn't call Foo.end() if it encounters an error, so I need to clean up myself.
I could use some advice on closures in this context:
    import atexit

    class Foo:
        def cleanup(self):
            # do something here
            pass
        def start(self):
            def do_cleanup():
                self.cleanup()
            atexit.register(do_cleanup)
        def end(self):
            # cleanup is no longer necessary... how do we unregister?
            pass
Will the closure work properly, e.g. in do_cleanup, is the value of self bound correctly?
How can I unregister an atexit() routine?
Is there a better way to do this?
edit: this is Python 2.6.5
Make a global registry and a single atexit function that calls every cleaner in it, and remove entries from the registry when they are no longer needed.
    import atexit

    cleaners = set()

    def _call_cleaners():
        for cleaner in list(cleaners):
            cleaner()

    atexit.register(_call_cleaners)

    class Foo(object):
        def cleanup(self):
            if self.cleaned:
                raise RuntimeError("ALREADY CLEANED")
            self.cleaned = True
        def start(self):
            self.cleaned = False
            cleaners.add(self.cleanup)
        def end(self):
            self.cleanup()
            cleaners.remove(self.cleanup)
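Usage then looks like this; anything still registered when the process exits is cleaned up by _call_cleaners:

    foo = Foo()
    foo.start()  # adds foo.cleanup to the global registry
    foo.end()    # cleans up and removes it again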
I think the code is fine. There's no way to unregister, but you can set a boolean flag that would disable cleanup:
    import atexit

    class Foo:
        def __init__(self):
            self.need_cleanup = True
        def cleanup(self):
            # do something here
            print 'clean up'
        def start(self):
            def do_cleanup():
                if self.need_cleanup:
                    self.cleanup()
            atexit.register(do_cleanup)
        def end(self):
            # cleanup is no longer necessary... how do we unregister?
            self.need_cleanup = False
Lastly, bear in mind that atexit handlers don't get called if "the program is killed by a signal not handled by Python, when a Python fatal internal error is detected, or when os._exit() is called."
self is bound correctly inside the do_cleanup callback, but in fact, if all you are doing is calling the method, you might as well use the bound method directly.
You use atexit.unregister() to remove the callback, but there is a catch: you must unregister the same function that you registered, and since you used a nested function, that means you have to store a reference to that function. If you follow my suggestion of using a bound method then you still have to save a reference to it:
    import atexit

    class Foo:
        def cleanup(self):
            # do something here
            pass
        def start(self):
            self._cleanup = self.cleanup  # need to save the bound method for unregister
            atexit.register(self._cleanup)
        def end(self):
            atexit.unregister(self._cleanup)
Note that it is still possible for your code to exit without calling the atexit-registered functions, for example if the process is aborted with Ctrl+Break on Windows or killed with SIGABRT on Linux.
Also, as another answer suggests, you could just use __del__, but that can be problematic for cleanup while a program is exiting, as it may not be called until after other globals it needs to access have been deleted.
Edited to note that when I wrote this answer the question didn't specify Python 2.x. Oh well, I'll leave the answer here anyway in case it helps anyone else.
Since shanked deleted his posting, I'll speak in favor of __del__ again:
    import atexit, weakref

    class Handler:
        def __init__(self, obj):
            self.obj = weakref.ref(obj)
        def cleanup(self):
            if self.obj is not None:
                obj = self.obj()
                if obj is not None:
                    obj.cleanup()

    class Foo:
        def __init__(self):
            self.start()
        def cleanup(self):
            print "cleanup"
            self.cleanup_handler = None
        def start(self):
            self.cleanup_handler = Handler(self)
            atexit.register(self.cleanup_handler.cleanup)
        def end(self):
            if self.cleanup_handler is None:
                return
            self.cleanup_handler.obj = None
            self.cleanup()
        def __del__(self):
            self.end()

    a1 = Foo()
    a1.end()
    a1 = Foo()
    a2 = Foo()
    del a2
    a3 = Foo()
    a3.m = a3
This supports the following cases:
- objects where .end is called regularly: cleanup right away
- objects that are released without .end being called: cleanup when the last reference goes away
- objects living in cycles: cleanup via atexit
- objects that are kept alive: cleanup via atexit
Notice that it is important that the cleanup handler holds a weak reference to the object, as it would otherwise keep the object alive.
Edit: Cycles involving Foo will not be garbage-collected, since Foo implements __del__. To allow for the cycle being deleted at garbage collection time, the cleanup must be taken out of the cycle.
    class Cleanup:
        cleaned = False
        def cleanup(self):
            if self.cleaned:
                return
            print "cleanup"
            self.cleaned = True
        def __del__(self):
            self.cleanup()

    class Foo:
        def __init__(self): ...
        def start(self):
            self.cleaner = Cleanup()
            atexit.register(Handler(self).cleanup)
        def cleanup(self):
            self.cleaner.cleanup()
        def end(self):
            self.cleanup()
It's important that the Cleanup object has no references back to Foo.
Why don't you try it? It only took me a minute to check.
(Answer: Yes)
However, you can simplify it. The closure isn't needed.
    import atexit

    class Foo:
        def cleanup(self):
            pass
        def start(self):
            atexit.register(self.cleanup)
And to avoid cleaning up twice, just check in the cleanup method whether cleanup is still needed before you clean up.
