I have a script that processes some data and if a database/file is present, writes some data into it. I specify the database or file as configargparse(argparse) argument. I need to clean (close file, db) in some organized way in case exceptions occur.
Here is my init:
import sqlite3
import configargparse
import sys

parser = configargparse.ArgParser(...)
parser.add('--database', dest='database',
           help='path to database with grabbers', metavar='FILE',
           type=lambda x: arghelper.is_valid_file(parser, x))
parser.add('-f', '--file', type=configargparse.FileType(mode='r'))
args = parser.parse_args()
I did it using if and try:
if args.database:
    conn = sqlite3.connect(args.database)
    c = conn.cursor()
# same init for file
try:
    while True:  # do something, it might be moved to some main() function
        result = foo()
        if args.database:
            c.execute('Write to database {}'.format(result))
        # same
        # for file
except KeyboardInterrupt:
    print('keyboard interrupt')
finally:
    if args.database:
        conn.close()
    # same
    # for file
Can it be done with the with statement? Something like this (borrowing the ()?():() ternary from C):
with ((args.database) ?
      (conn = sqlite3.connect(args.database)) :
      (None)) as db, same for file:
and then refer to the db inside the with clause and check if they exist?
To answer your question first: it can be done, using contextlib. But I'm not sure how much you would gain from this.
from contextlib import contextmanager

@contextmanager
def uncertain_conn(args):
    yield sqlite3.connect(args.database) if args.database else None

# Then you use it like this
with uncertain_conn(args) as conn:
    # conn will be the value yielded by uncertain_conn(args)
    if conn is not None:
        try:
            # ...
But as I said, while turning a generator function into a context manager is cool (and personally I really like the contextmanager decorator), and it does give you the functionality you say you want, I don't know how much it really helps you here. If I were you I'd probably just be happy with if:
if args.database:
    conn = sqlite3.connect(args.database)
    try:
        # ...
There are a couple of things you can simplify with with, though. Check out closing, also from contextlib (really simple, I'll just quote the documentation):
contextlib.closing(thing)
    Return a context manager that closes thing upon completion of the block. This is basically equivalent to:
from contextlib import contextmanager

@contextmanager
def closing(thing):
    try:
        yield thing
    finally:
        thing.close()
So the above code can become:
if args.database:
    conn = sqlite3.connect(args.database)
    with closing(conn):
        # do something; conn.close() will be called no matter what
But this won't print a nice message for KeyboardInterrupt. If you really need that, then I guess you still have to write out the try-except-finally yourself. Doing anything more fanciful is probably not worth it. (And note that except must precede finally, as in the corrected code above; the other order is a syntax error.)
And you can even do this with suppress (but it requires a bit of caution; see below):
from contextlib import suppress

with suppress(TypeError):
    conn = sqlite3.connect(args.database or None)
    with closing(conn):
        # do business
with suppress(error):
    do_thing

is equivalent to:

try:
    do_thing
except error:
    pass
So if args.database evaluates to False, the second line is effectively connect(None), which raises a TypeError, which will be caught by the context manager and the code below will be skipped. But the risk is that it will suppress all TypeErrors in its scope, and you may not want that.
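On Python 3.7+ there is a tidier way to express this optional-resource idea without risking swallowed TypeErrors: contextlib.nullcontext. A minimal sketch (mine, not part of the original answer), reusing args and foo() from the question:

import sqlite3
from contextlib import closing, nullcontext

# enter a real closing(...) context only when a database path was given;
# otherwise enter a do-nothing context whose as-target is None
cm = closing(sqlite3.connect(args.database)) if args.database else nullcontext()
with cm as conn:
    result = foo()
    if conn is not None:
        conn.execute('Write to database {}'.format(result))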
You can create your own context manager in such cases. Create one which handles both connections. A context manager is a class with the methods __enter__() and __exit__(): the first is called when the with block is entered, the second when it is left, however that happens (normal completion or an exception).
Here's an example for how to do this in your case:
def f(cond1, cond2):
    class MultiConnectionContextManager(object):
        def __init__(self, cond1, cond2):
            self.cond1 = cond1
            self.cond2 = cond2

        def __enter__(self):
            print("entering ...")
            if self.cond1:
                # self.connection1 = open(...)
                print("opening connection1")
            if self.cond2:
                # self.connection2 = open(...)
                print("opening connection2")
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            print("exiting ...")
            if self.cond1:
                # self.connection1.close()
                print("closing connection1")
            if self.cond2:
                # self.connection2.close()
                print("closing connection2")

    with MultiConnectionContextManager(cond1, cond2) as handle:
        if cond1:
            # handle.connection1.read()
            print("using handle.connection1")
        if cond2:
            # handle.connection2.read()
            print("using handle.connection2")

for cond1 in (False, True):
    for cond2 in (False, True):
        print("=====", cond1, cond2)
        f(cond1, cond2)
You can call this directly to see the outcome. Replace the prints with your real statements for opening, using, and closing the connections.
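As an aside, the standard library can replace most of this hand-rolled class: contextlib.ExitStack (Python 3.3+) manages a variable number of optional resources. A minimal sketch under the question's setup (args as parsed above):

import sqlite3
from contextlib import ExitStack, closing

with ExitStack() as stack:
    conn = None
    if args.database:
        conn = stack.enter_context(closing(sqlite3.connect(args.database)))
    if args.file:
        # FileType already opened the file; register it so it gets closed
        stack.enter_context(args.file)
    # ... do the work; everything registered is closed on exit,
    # in reverse order, even if an exception is raised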
Related
Can someone explain the idea of the generator and try/except in this code:
from contextlib import contextmanager

@contextmanager
def file_open(path):
    try:
        f_obj = open(path, 'w')
        yield f_obj
    except OSError:
        print("We had an error!")
    finally:
        print('Closing file')
        f_obj.close()

if __name__ == '__main__':
    with file_open('test.txt') as fobj:
        fobj.write('Testing context managers')
As I know, finally is always executed regardless of what happens in try. So in my opinion this code should work like this: if we have no exceptions, we open the file, go to the generator, then we go to the finally block and return from the function. But I can't understand how the generator works in this code. We used it only once and that's why we can't write all the text to the file. But I think my thoughts are incorrect. Why?
So, one, your implementation is incorrect. You'll try to close the open file object even if it failed to open, which is a problem. What you need to do in this case is:
@contextmanager
def file_open(path):
    try:
        f_obj = open(path, 'w')
        try:
            yield f_obj
        finally:
            print('Closing file')
            f_obj.close()
    except OSError:
        print("We had an error!")
or more simply:
@contextmanager
def file_open(path):
    try:
        with open(path, 'w') as f_obj:
            yield f_obj
            print('Closing file')
    except OSError:
        print("We had an error!")
To "how do generators in general work?" I'll refer you to the existing question on that topic. This specific case is complicated because using the #contextlib.contextmanager decorator repurposes generators for a largely unrelated purpose, using the fact that they innately pause in two cases:
On creation (until the first value is requested)
On each yield (when each subsequent value is requested)
to implement context management.
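A tiny demonstration of those two pause points (my own illustration, not from the original answer):

def gen():
    print("setup")       # runs only when the first value is requested
    yield "resource"     # pauses here until the next request
    print("teardown")    # runs when the second value is requested

g = gen()                # paused on creation: nothing printed yet
print(next(g))           # prints "setup", then "resource"
try:
    next(g)              # prints "teardown", then raises StopIteration
except StopIteration:
    pass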
contextmanager just abuses this to make a class like this (actual source code is rather more complicated to cover edge cases):
class contextmanager:
    def __init__(self, gen):
        self.gen = gen  # Receives generator in initial state

    def __enter__(self):
        return next(self.gen)  # Advances to first yield, returning the value it yields

    def __exit__(self, *args):
        if args[0] is not None:
            self.gen.throw(*args)  # Plus some complicated handling to ensure it did the right thing
        else:
            try:
                next(self.gen)  # Check if it yielded more than once
            except StopIteration:
                pass  # Expected to only yield once
            else:
                raise RuntimeError(...)  # Oops, it yielded more than once, that's not supposed to happen
allowing the coroutine elements of generators to back a simpler way to write simple context managers.
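To actually use the simplified class above as a decorator you would need one more wrapper, so that each with statement gets a fresh generator. A hypothetical sketch (as_context_manager is my own name, not contextlib's):

# wrap a generator function so each call produces a new contextmanager
# instance around a fresh generator (the real decorator does this too)
def as_context_manager(gen_func):
    def helper(*args, **kwargs):
        return contextmanager(gen_func(*args, **kwargs))
    return helper

@as_context_manager
def demo():
    print("enter")   # __enter__ advances the generator to here
    yield 42
    print("exit")    # __exit__ advances it past the yield

with demo() as value:
    print(value)     # prints: enter, 42, exit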
Is there a way to stop a function from calling print?
I am using the pygame.joystick module for a game I am working on.
I created a pygame.joystick.Joystick object and, in the main loop of the game, call its member function get_button to check for user input. The function does everything I need it to do, but the problem is that it also calls print, which slows down the game considerably.
Can I block this call to print?
Python lets you overwrite standard output (stdout) with any file object. This should work cross-platform and write to the null device.
import sys, os

# Disable
def blockPrint():
    sys.stdout = open(os.devnull, 'w')

# Restore
def enablePrint():
    sys.stdout = sys.__stdout__

print('This will print')

blockPrint()
print("This won't")

enablePrint()
print("This will too")
If you don't want that one function to print, call blockPrint() before it, and enablePrint() when you want it to continue. If you want to disable all printing, start blocking at the top of the file.
Use with
Based on @FakeRainBrigand's solution, I'm suggesting a safer solution:
import os, sys

class HiddenPrints:
    def __enter__(self):
        self._original_stdout = sys.stdout
        sys.stdout = open(os.devnull, 'w')

    def __exit__(self, exc_type, exc_val, exc_tb):
        sys.stdout.close()
        sys.stdout = self._original_stdout
Then you can use it like this:
with HiddenPrints():
    print("This will not be printed")

print("This will be printed as before")
This is much safer because you cannot forget to re-enable stdout, which is especially critical when handling exceptions.
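A quick check (my own example) that stdout comes back even when the block raises:

try:
    with HiddenPrints():
        print("hidden")
        raise ValueError("boom")
except ValueError:
    pass

print("printing works again, even after the exception")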
Without with — Bad practice
The following example uses the enable/disable print functions that were suggested in a previous answer.
Imagine there is code that may raise an exception. We have to use a finally clause in order to re-enable prints in every case.
try:
    disable_prints()
    something_throwing()
    enable_prints()  # This will not help in case of exception
except ValueError as err:
    handle_error(err)
finally:
    enable_prints()  # That's where it needs to go.
If you forgot the finally clause, none of your print calls would print anything anymore.
It is safer to use the with statement, which makes sure that prints will be reenabled.
Note: It is not safe to use sys.stdout = None, because someone could call methods like sys.stdout.write()
As @Alexander Chzhen suggested, using a context manager would be safer than calling a pair of state-changing functions.
However, you don't need to reimplement the context manager - it's already in the standard library. You can redirect stdout (the file object that print uses) with contextlib.redirect_stdout, and also stderr with contextlib.redirect_stderr.
import os
import contextlib
with open(os.devnull, "w") as f, contextlib.redirect_stdout(f):
    print("This won't be printed.")
If you want to block print calls made by a particular function, there is a neater solution using decorators. Define the following decorator:
import os, sys

# decorator used to block function printing to the console
def blockPrinting(func):
    def func_wrapper(*args, **kwargs):
        # block all printing to the console
        sys.stdout = open(os.devnull, 'w')
        # call the method in question
        value = func(*args, **kwargs)
        # enable all printing to the console
        sys.stdout = sys.__stdout__
        # pass the return value of the method back
        return value
    return func_wrapper
Then just place @blockPrinting before any function. For example:
# This will print
def helloWorld():
    print("Hello World!")
helloWorld()

# This will not print
@blockPrinting
def helloWorld2():
    print("Hello World!")
helloWorld2()
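One caveat: if the wrapped function raises, the decorator above never restores sys.stdout. A safer variant (my own sketch, not the original answer's code) uses redirect_stdout so restoration happens even on error:

import os
from contextlib import redirect_stdout
from functools import wraps

def blockPrintingSafe(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        # stdout is restored and the devnull handle closed even if func raises
        with open(os.devnull, 'w') as devnull, redirect_stdout(devnull):
            return func(*args, **kwargs)
    return wrapper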
If you are using Jupyter Notebook or Colab, use this:
from IPython.utils import io

with io.capture_output() as captured:
    print("I will not be printed.")
I have had the same problem, and I did not come to another solution but to redirect the output of the program (I don't know exactly whether the spamming happens on stdout or stderr) to the /dev/null nirvana.
Indeed, it's open source, but I wasn't passionate enough to dive into the pygame sources - and the build process - to somehow stop the debug spam.
EDIT:
The pygame.joystick module has calls to printf in all functions that return the actual values to Python:
printf("SDL_JoystickGetButton value:%d:\n", value);
Unfortunately you would need to comment these out and recompile the whole thing. Maybe the provided setup.py would make this easier than I thought. You could try this...
A completely different approach would be redirecting at the command line. If you're on Windows, this means a batch script. On Linux, bash.
/full/path/to/my/game/game.py > /dev/null
C:\Full\Path\To\My\Game.exe > nul
Unless you're dealing with multiple processes, this should work. For Windows users this could be the shortcuts you're creating (start menu / desktop).
You can do a simple rebinding of the print name; this seems a lot safer than messing with stdout, and doesn't pull in any additional libraries.
enable_print = print
disable_print = lambda *x, **y: None

print = disable_print
function_that_has_print_in_it(1)  # nothing is printed

print = enable_print
function_that_has_print_in_it(2)  # printing works again!
Note: this only works to disable the print() function, and would not disable all output if you're making calls to something else that produces output, for instance if you were calling a C library that was producing its own output to stdout, or if you were using input().
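Since pygame's spam actually comes from printf in C (see the next answers), silencing it has to happen at the file-descriptor level, not at the Python-name level. A hedged sketch (my own) using os.dup2:

import os

def silence_c_stdout():
    # point the real stdout file descriptor (fd 1) at the null device;
    # this also silences printf output from C extensions
    saved_fd = os.dup(1)
    devnull_fd = os.open(os.devnull, os.O_WRONLY)
    os.dup2(devnull_fd, 1)
    os.close(devnull_fd)
    return saved_fd

def restore_c_stdout(saved_fd):
    os.dup2(saved_fd, 1)  # point fd 1 back at the original stdout
    os.close(saved_fd)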
No, there is not, especially since the majority of PyGame is written in C.
But if this function calls print, then it's a PyGame bug, and you should just report it.
The module I used printed to stderr. So the solution in that case would be:
sys.stderr = open(os.devnull, 'w')
"stop a function from calling print"
# import builtins
# import __builtin__  # python2, not tested

printenabled = False

def decorator(func):
    def new_func(*args, **kwargs):
        if printenabled:
            func("print:", *args, **kwargs)
    return new_func

print = decorator(print)  # current file
# builtins.print = decorator(builtins.print)  # all files
# __builtin__.print = decorator(__builtin__.print)  # python2

import sys
import xxxxx

def main():
    global printenabled
    printenabled = True
    print("1 True")
    printenabled = False
    print("2 False")
    printenabled = True
    print("3 True")
    printenabled = False
    print("4 False")

if __name__ == '__main__':
    sys.exit(main())

# output
print: 1 True
print: 3 True
https://stackoverflow.com/a/27622201
Change the value of the file argument of the print() function. By default it is sys.stdout; instead, we can write to the null device with open(os.devnull, 'w'):
import os, sys

mode = 'debug'  # or 'prod'

if mode == 'debug':
    fileobj = sys.stdout
else:
    fileobj = open(os.devnull, 'w')

print('Hello Stackoverflow', file=fileobj)
Based on @Alexander Chzhen's solution, I present here a way to apply it to a function, with an option to suppress printing or not.
import os, sys

class SuppressPrints:
    # different from Alexander's answer
    def __init__(self, suppress=True):
        self.suppress = suppress

    def __enter__(self):
        if self.suppress:
            self._original_stdout = sys.stdout
            sys.stdout = open(os.devnull, 'w')

    def __exit__(self, exc_type, exc_val, exc_tb):
        if self.suppress:
            sys.stdout.close()
            sys.stdout = self._original_stdout
# implementation
def foo(suppress=True):
    with SuppressPrints(suppress):
        print("It will be printed, or not")

foo(True)   # it will not be printed
foo(False)  # it will be printed
I hope I can add my solution below Alexander's answer as a comment, but I don't have enough (50) reputation to do so.
If you want to enable/disable print with a variable, you could call an auxiliary function instead of print, something like printe (the name is just for convenience):
def printe(*what_to_print):
    if prints_enable:
        string = ""
        for items in what_to_print:
            string += str(items) + " "
        print(string)
Define a new Print function in which you enable print first, print your output, and then disable print again:

def Print(*output):
    enablePrint()
    print(*output)
    disablePrint()

using one of the "safe" enable/disable function pairs from above.
I have a requirement to execute multiple Python statements, and a few of them might fail during execution. Even after one fails, I want the rest of them to be executed.
Currently, I am doing:
try:
    wx.StaticBox.Destroy()
    wx.CheckBox.Disable()
    wx.RadioButton.Enable()
except:
    pass
If any one of the statements fails, the except block runs and the remaining statements in the try block are skipped. But what I need is for all three statements to run even if one of them fails.
How can I do this in Python?
Use a for loop over the methods you wish to call, e.g.:
for f in (wx.StaticBox.Destroy, wx.CheckBox.Disable, wx.RadioButton.Enable):
    try:
        f()
    except Exception:
        pass
Note that we're using except Exception here - that's generally much more likely what you want than a bare except.
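The same loop can also be written with contextlib.suppress (Python 3.4+), if you prefer a with block to an explicit try/except:

from contextlib import suppress

for f in (wx.StaticBox.Destroy, wx.CheckBox.Disable, wx.RadioButton.Enable):
    with suppress(Exception):
        f()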
If an exception occurs during a try block, the rest of the block is skipped. You should use three separate try clauses for your three separate statements.
Added in response to comment:
Since you apparently want to handle many statements, you could use a wrapper method to check for exceptions:
def mytry(functionname):
    try:
        functionname()
    except Exception:
        pass
Then call the method with the name of your function as input:
mytry(wx.StaticBox.Destroy)
I would recommend creating a context manager class that suppresses any exception and logs the exceptions.
Please look at the code below. I would encourage any improvement to it.
import sys

class catch_exception:
    def __init__(self, raising=True):
        self.raising = raising

    def __enter__(self):
        pass

    def __exit__(self, type, value, traceback):
        # guard against type being None (i.e. no exception was raised)
        if type is not None and issubclass(type, Exception):
            self.raising = False
            print("Type: ", type, " Log me to error log file")
        return not self.raising

def staticBox_destroy():
    print("staticBox_destroy")
    raise TypeError("Passing through")

def checkbox_disable():
    print("checkbox_disable")
    raise ValueError("Passing through")

def radioButton_enable():
    print("radioButton_enable")
    raise ValueError("Passing through")

if __name__ == "__main__":
    with catch_exception() as cm:
        staticBox_destroy()
    with catch_exception() as cm:
        checkbox_disable()
    with catch_exception() as cm:
        radioButton_enable()
I am running into a bit of an issue with keeping a context manager open through function calls. Here is what I mean:
There is a context-manager defined in a module which I use to open SSH connections to network devices. The "setup" code handles opening the SSH sessions and handling any issues, and the teardown code deals with gracefully closing the SSH session. I normally use it as follows:
from manager import manager

def do_stuff(device):
    with manager(device) as conn:
        output = conn.send_command("show ip route")
        # process output...
        return processed_output
In order to keep the SSH session open and not have to re-establish it across function calls, I would like to add an argument to "do_stuff" which can optionally return the SSH session along with the data returned from the SSH session, as follows:
def do_stuff(device, return_handle=False):
    with manager(device) as conn:
        output = conn.send_command("show ip route")
        # process output...
        if return_handle:
            return (processed_output, conn)
        else:
            return processed_output
I would like to be able to call this function "do_stuff" from another function, as follows, such that it signals to "do_stuff" that the SSH handle should be returned along with the output.
def do_more_stuff(device):
    data, conn = do_stuff(device, return_handle=True)
    output = conn.send_command("show users")
    # process output...
    return processed_output
However the issue that I am running into is that the SSH session is closed, due to the do_stuff function "returning" and triggering the teardown code in the context-manager (which gracefully closes the SSH session).
I have tried converting "do_stuff" into a generator, such that its state is suspended and perhaps causing the context-manager to stay open:
def do_stuff(device, return_handle=False):
    with manager(device) as conn:
        output = conn.send_command("show ip route")
        # process output...
        if return_handle:
            yield (processed_output, conn)
        else:
            yield processed_output
And calling it as such:
def do_more_stuff(device):
    gen = do_stuff(device, return_handle=True)
    data, conn = next(gen)
    output = conn.send_command("show users")
    # process output...
    return processed_output
However this approach does not seem to be working in my case, as the context-manager gets closed, and I get back a closed socket.
Is there a better way to approach this problem? Maybe my generator needs some more work... I think using a generator to hold state is the most "obvious" way that comes to mind, but overall, should I be looking into another way of keeping the session open across function calls?
Thanks
I found this question because I was looking for a solution to an analogous problem where the object I wanted to keep alive was a pyvirtualdisplay.display.Display instance with selenium.webdriver.Firefox instances in it.
I also wanted any opened resources to die if an exception were raised during the display/browser instance creations.
I imagine the same could be applied to your database connection.
I recognize this is probably only a partial solution and contains less-than-best practices. Help is appreciated.
This answer is the result of an ad lib spike using the following resources to patch together my solution:
https://docs.python.org/3/library/contextlib.html#contextlib.ContextDecorator
http://www.wefearchange.org/2013/05/resource-management-in-python-33-or.html
(I do not yet fully grok what is described here, though I appreciate the potential. The second link above eventually proved to be the most helpful by providing analogous situations.)
from pyvirtualdisplay.display import Display
from selenium.webdriver import Firefox
from contextlib import contextmanager, ExitStack

RFBPORT = 5904

def acquire_desktop_display(rfbport=RFBPORT):
    display_kwargs = {'backend': 'xvnc', 'rfbport': rfbport}
    display = Display(**display_kwargs)
    return display

def release_desktop_display(self):
    print("Stopping the display.")
    # browsers apparently die with the display so no need to call quits on them
    self.display.stop()

def check_desktop_display_ok(desktop_display):
    print("Some checking going on here.")
    return True

class XvncDesktopManager:
    max_browser_count = 1

    def __init__(self, check_desktop_display_ok=None, **kwargs):
        self.rfbport = kwargs.get('rfbport', RFBPORT)
        self.acquire_desktop_display = acquire_desktop_display
        self.release_desktop_display = release_desktop_display
        # fall back to the module-level check if none was passed in
        self.check_desktop_display_ok = (check_desktop_display_ok
            if check_desktop_display_ok is not None
            else globals()['check_desktop_display_ok'])

    @contextmanager
    def _cleanup_on_error(self):
        with ExitStack() as stack:
            # push adds a context manager's __exit__() method
            # to stack's callback stack
            stack.push(self)
            yield
            # The validation check passed and didn't raise an exception.
            # Accordingly, we want to keep the resource, and pass it
            # back to our caller.
            stack.pop_all()

    def __enter__(self):
        url = ('http://stackoverflow.com/questions/30905121/'
               'keeping-context-manager-object-alive-through-function-calls')
        self.display = self.acquire_desktop_display(self.rfbport)
        with ExitStack() as stack:
            # add XvncDesktopManager instance's exit method to callback stack
            stack.push(self)
            self.display.start()
            self.browser_resources = [
                Firefox() for x in range(self.max_browser_count)
            ]
            for browser_resource in self.browser_resources:
                for url in (url, ):
                    browser_resource.get(url)
            # This is the last bit of magic: ExitStacks have a .close()
            # method which unwinds all the registered context managers and
            # callbacks and invokes their exit functionality. Capture the
            # function that calls all the exits; it will be called later,
            # outside the context in which it was captured.
            self.close_all = stack.pop_all().close
        # if something fails in this context in __enter__, clean up
        with self._cleanup_on_error() as stack:
            if not self.check_desktop_display_ok(self):
                msg = "Failed validation for {!r}"
                raise RuntimeError(msg.format(self.display))
        # self is assigned to the variable after "as";
        # manually call close_all to unwind the callback stack
        return self

    def __exit__(self, *exc_details):
        # had to comment this out, unable to add this to callback stack
        # self.release_desktop_display(self)
        pass
I had a semi-expected result with the following:
kwargs = {
    'rfbport': 5904,
}
_desktop_manager = XvncDesktopManager(
    check_desktop_display_ok=check_desktop_display_ok, **kwargs)

with ExitStack() as stack:
    # context entered and what is inside the __enter__ method is executed;
    # desktop_manager will have an attribute "close_all" that can be
    # called explicitly to unwind the callback stack
    desktop_manager = stack.enter_context(_desktop_manager)

# I was able to manipulate the browsers inside of the display
# and outside of the context,
# before calling desktop_manager.close_all()
browser, = desktop_manager.browser_resources
browser.get(url)

# close everything down when finished with the resource
desktop_manager.close_all()  # does nothing, not in callback stack

# this functioned as expected
desktop_manager.release_desktop_display(desktop_manager)
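Distilled back to the original SSH question, the essential trick is ExitStack.pop_all(): enter the manager on a stack, then detach the callbacks so teardown survives the function return. A minimal sketch, assuming the manager(device) context manager from the question:

from contextlib import ExitStack
from manager import manager

def do_stuff(device, return_handle=False):
    with ExitStack() as stack:
        conn = stack.enter_context(manager(device))
        output = conn.send_command("show ip route")
        processed_output = output  # placeholder for real processing
        if return_handle:
            # detach the teardown; the caller is now responsible for it
            close = stack.pop_all().close
            return processed_output, conn, close
        return processed_output  # stack unwinds here, closing the session

def do_more_stuff(device):
    data, conn, close = do_stuff(device, return_handle=True)
    try:
        return conn.send_command("show users")
    finally:
        close()  # gracefully closes the SSH session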
I'm looking to encapsulate the logic for database transactions into a with block, wrapping the code in a transaction and handling various exceptions (locking issues). This is simple enough; however, I'd also like the block to encapsulate the retrying of the code block following certain exceptions. I can't see a way to package this up neatly into the context manager.
Is it possible to repeat the code within a with statement?
I'd like to use it as simply as this, which is really neat.
def do_work():
    ...
    # This is ideal!
    with transaction(retries=3):
        # Atomic DB statements
        ...
    ...
I'm currently handling this with a decorator, but I'd prefer to offer the context manager (or in fact both), so I can choose to wrap a few lines of code in the with block instead of an inline function wrapped in a decorator, which is what I do at the moment:
def do_work():
    ...
    # This is not ideal!
    @transaction(retries=3)
    def _perform_in_transaction():
        # Atomic DB statements
        ...
    _perform_in_transaction()
    ...
Is it possible to repeat the code within a with statement?
No.
As pointed out earlier in that mailing list thread, you can reduce a bit of duplication by making the decorator call the passed function:
def do_work():
    ...
    # This is not ideal!
    @transaction(retries=3)
    def _perform_in_transaction():
        # Atomic DB statements
        ...
    # called implicitly
    ...
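A hedged sketch of what such a decorator could look like (my own illustration; the retry policy is an assumption). It calls the function right away, so the decorated name ends up bound to the result rather than to a wrapper:

def transaction(retries=3):
    def run(func):
        last_err = None
        for _ in range(retries):
            try:
                return func()  # executed immediately at decoration time
            except Exception as e:
                last_err = e
        raise last_err
    return run

@transaction(retries=3)
def _perform_in_transaction():
    ...  # atomic DB statements; runs (up to 3 times) as soon as it's defined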
The way that occurs to me to do this is just to implement a standard database transaction context manager, but allow it to take a retries argument in the constructor. Then I'd just wrap that up in your method implementations. Something like this:
class transaction(object):
    def __init__(self, retries=0):
        self.retries = retries

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, traceback):
        pass

    # Implementation...
    def execute(self, query):
        err = None
        # initial attempt plus the configured number of retries
        for _ in range(self.retries + 1):
            try:
                # assumes self._cursor is set up in __enter__ in a real version
                return self._cursor.execute(query)
            except Exception as e:
                err = e  # probably ought to save all errors, but hey
        raise err
with transaction(retries=3) as cursor:
    cursor.execute('BLAH')
As decorators are just functions themselves, you could do the following:
with transaction(_perform_in_transaction, retries=3) as _perf:
    _perf()
For the details, you'd need to implement transaction() as a factory method that returns an object with __call__() set to call the original method and repeat it up to retries number of times on failure; __enter__() and __exit__() would be defined as normal for database transaction context managers.
You could alternatively set up transaction() such that it itself executes the passed method up to retries number of times, which would probably require about the same amount of work as implementing the context manager but would mean actual usage would be reduced to just transaction(_perform_in_transaction, retries=3) (which is, in fact, equivalent to the decorator example delnan provided).
While I agree it can't be done with a context manager... it can be done with two context managers!
The result is a little awkward, and I am not sure whether I approve of my own code yet, but this is what it looks like on the client side:
with RetryManager(retries=3) as rm:
    while rm:
        with rm.protect:
            print("Attempt #%d of %d" % (rm.attempt_count, rm.max_retries))
            # Atomic DB statements
There is still an explicit while loop, and not one but two with statements, which leaves a little too much opportunity for mistakes for my liking.
Here's the code:
class RetryManager(object):
    """ Context manager that counts attempts to run statements without
    exceptions being raised.
    - returns True when there should be more attempts
    """
    class _RetryProtector(object):
        """ Context manager that only raises exceptions if its parent
        RetryManager has given up."""
        def __init__(self, retry_manager):
            self._retry_manager = retry_manager

        def __enter__(self):
            self._retry_manager._note_try()
            return self

        def __exit__(self, exc_type, exc_val, traceback):
            if exc_type is None:
                self._retry_manager._note_success()
            else:
                # This would be a good place to implement sleep between
                # retries.
                pass
            # Suppress exception if the retry manager is still alive.
            return self._retry_manager.is_still_trying()

    def __init__(self, retries=1):
        self.max_retries = retries
        self.attempt_count = 0  # Note: 1-based.
        self._success = False
        self.protect = RetryManager._RetryProtector(self)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, traceback):
        pass

    def _note_try(self):
        self.attempt_count += 1

    def _note_success(self):
        self._success = True

    def is_still_trying(self):
        return not self._success and self.attempt_count < self.max_retries

    def __bool__(self):
        return self.is_still_trying()
Bonus: I know you don't want to separate your work off into separate functions wrapped with decorators, but if you were happy with that, the redo package from Mozilla offers decorators to do that, so you don't have to roll your own. There is even a context manager that effectively acts as a temporary decorator for your function, but it still relies on your retryable code being factored out into a single function.
This question is a few years old but after reading the answers I decided to give this a shot.
This solution requires the use of a "helper" class, but I think it does provide an interface with retries configured through a context manager.
class Client:
    def _request(self):
        # do request stuff
        print("tried")
        raise Exception()

    def request(self):
        retry = getattr(self, "_retry", None)
        if not retry:
            return self._request()
        else:
            for n in range(retry.tries):
                try:
                    return self._request()
                except Exception:
                    retry.attempts += 1

class Retry:
    def __init__(self, client, tries=1):
        self.client = client
        self.tries = tries
        self.attempts = 0

    def __enter__(self):
        self.client._retry = self

    def __exit__(self, *exc):
        print(f"Tried {self.attempts} times")
        del self.client._retry
>>> client = Client()
>>> with Retry(client, tries=3):
...     # will try 3 times
...     response = client.request()
tried
tried
tried
Tried 3 times