I'm trying to replace MongoEngine with MotorEngine in my Tornado app for the sake of asynchronous DB access, and so far I've gotten nowhere.
The query:

@gen.coroutine
def get_all_users(self):
    users = yield User.objects.find_all()
The handler:

class IUser(BaseHandler):
    @asynchronous
    @gen.engine
    def get(self, userId=None, *args, **kwargs):
        try:
            userMethods = UserMethods()
            sessionId = self.request.headers.get('sessionId')
            ret = userMethods.get_all_users()
        except Exception as ex:
            print str(ex)
        self.finish()
When I print the ret variable it says <tornado.concurrent.Future object at 0x7fb0236fe450>. If I try to print ret.result(), that gets me nowhere either.
Any help is appreciated, since I'm struggling with just about everything here.
get_all_users needs to return its value somehow. In Python 2.6 or 2.7, generators aren't allowed to use the "return" statement with a value, so coroutines have a special "Return" exception:
@gen.coroutine
def get_all_users(self):
    users = yield User.objects.find_all()
    raise gen.Return(users)
In Python 3.3 and later, you can simply "return users".
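To see why the special exception is needed, here is a minimal stdlib-only sketch (no Tornado involved; the Return class, the run driver, and the hard-coded user list are illustrative stand-ins) of how an exception can smuggle a value out of a generator on interpreters where "return value" is forbidden inside one:

```python
# Sketch: smuggling a value out of a generator via an exception,
# the same trick gen.Return uses under the hood.
class Return(Exception):
    def __init__(self, value):
        self.value = value

def run(gen):
    """Drive a generator to completion, catching Return to get its value."""
    try:
        while True:
            next(gen)
    except Return as r:
        return r.value
    except StopIteration:
        return None

def get_all_users():
    users = ["alice", "bob"]  # stand-in for: yield User.objects.find_all()
    yield                     # make this a generator, as coroutines are
    raise Return(users)

print(run(get_all_users()))  # ['alice', 'bob']
```

Tornado's coroutine runner does essentially this (plus Future plumbing) for every function decorated with gen.coroutine.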
Now in "get", calling "get_all_users" only gives you a pending Future, not a value. You must wait for the Future to resolve to a value by yielding it:
    ret = yield userMethods.get_all_users()
For more about calling coroutines from coroutines, see my "Refactoring Tornado Coroutines".
By the way, you can decorate "get" with just "gen.coroutine"; it's more modern than the "asynchronous" plus "gen.engine" combination, but either style works.
Just a suggestion: if you want to avoid creating an instance of UserMethods every time you use its methods:

userMethods = UserMethods()

you can use the @classmethod decorator when declaring them:
class UserMethods():
    @classmethod
    @tornado.gen.coroutine
    def get_all_users(cls):
        users = yield User.objects.find_all()
        raise gen.Return(users)

## class IUser
...
try:
    # userMethods = UserMethods()  -- not necessary now --
    sessionId = self.request.headers.get('sessionId')
    ret = yield UserMethods.get_all_users()
except Exception as ex:
    print str(ex)
...
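As a quick stdlib-only illustration of what @classmethod buys you here (the class name is reused from above, but the registry list is a hypothetical stand-in for the database):

```python
class UserMethods:
    registry = ["alice", "bob"]  # hypothetical stand-in for the DB collection

    @classmethod
    def get_all_users(cls):
        # cls is the class itself; no instance is needed to call this.
        return list(cls.registry)

# Called directly on the class -- no UserMethods() instantiation required.
print(UserMethods.get_all_users())  # ['alice', 'bob']
```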
Related
I have a class that handles the API calls to a server. Certain methods within the class require the user to be logged in. Since it is possible for the session to run out, I need some functionality that logs the user in again once the session has timed out. My idea was to use a decorator. If I try it like this
class Outer_Class():
    class login_required():
        def __init__(self, decorated_func):
            self.decorated_func = decorated_func

        def __call__(self, *args, **kwargs):
            try:
                response = self.decorated_func(*args, **kwargs)
            except:
                print('Session probably timed out. Logging in again ...')
                args[0]._login()
                response = self.decorated_func(*args, **kwargs)
            return response

    def __init__(self):
        self.logged_in = False
        self.url = 'something'
        self._login()

    def _login(self):
        print(f'Logging in on {self.url}!')
        self.logged_in = True

    # this method requires the user to be logged in
    @login_required
    def do_something(self, param_1):
        print('Doing something important with param_1')
        if ():  # ..this fails
            raise Exception()
I get an error: AttributeError: 'str' object has no attribute '_login'
Why do I not get a reference to the Outer_Class instance handed over via *args? Is there another way to get a reference to the instance?
I found this answer, How to get instance given a method of the instance?, but the decorated function doesn't seem to have a reference to its own instance.
It works fine when I'm using a decorator function outside of the class. That solves the problem, but I'd like to know if it is possible to solve it this way.
The problem is that the magic of passing the object as the first hidden parameter only works for non-static methods. As your decorator returns a custom callable object which is not a function, it never receives the calling object, which is simply lost in the call. So when you try to call the decorated function, you only pass it param_1 in the position of self. You get a first exception, do_something() missing 1 required positional argument: 'param_1', fall into the except block, and get your error.
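This binding difference can be shown in isolation with a small stdlib-only sketch (the class names FuncLike and Demo are illustrative): plain functions implement the descriptor protocol's __get__ and therefore get bound to the instance, while an arbitrary callable object does not.

```python
class FuncLike:
    """Callable but NOT a function: has no __get__, so it is never bound."""
    def __call__(self, *args):
        return args

class Demo:
    as_object = FuncLike()     # looked up as a plain attribute

    def as_method(self, x):    # a real function: bound via __get__
        return x

d = Demo()
print(d.as_object(1))   # (1,)  -- no implicit self was passed
print(d.as_method(1))   # 1    -- self was passed implicitly, x received 1
```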
You can still tie the decorator to the class, but it must be a function to have self magic work:
class Outer_Class():
    def login_required(decorated_func):
        def inner(self, *args, **kwargs):
            print("decorated called")
            try:
                response = decorated_func(self, *args, **kwargs)
            except:
                print('Session probably timed out. Logging in again ...')
                self._login()
                response = decorated_func(self, *args, **kwargs)
            return response
        return inner

    ...

    # this method requires the user to be logged in
    @login_required
    def do_something(self, param_1):
        print('Doing something important with param_1', param_1)
        if (False):  # ..this fails
            raise Exception()
You can then successfully do:
>>> a = Outer_Class()
Logging in on something!
>>> a.do_something("foo")
decorated called
Doing something important with param_1 foo
You have the call args[0]._login() in the except block. Since args[0] is a string here, and strings don't have a _login method, you get the error message mentioned in the question.
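An alternative sketch (my own assumption, not part of the answers above): the class-based decorator can be kept if it implements __get__ itself, so Python's descriptor protocol binds it to the instance like a normal method. All names below (Client, do_something) are illustrative.

```python
import functools

class login_required:
    """Class-based decorator made instance-aware via the descriptor protocol."""
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        # Return a wrapper with the instance pre-bound, mimicking a bound method.
        return functools.partial(self.__call__, obj)

    def __call__(self, obj, *args, **kwargs):
        try:
            return self.func(obj, *args, **kwargs)
        except Exception:
            obj._login()                        # obj is the real instance now
            return self.func(obj, *args, **kwargs)

class Client:
    def __init__(self):
        self.logged_in = False

    def _login(self):
        self.logged_in = True

    @login_required
    def do_something(self, x):
        if not self.logged_in:
            raise RuntimeError("not logged in")
        return x * 2

c = Client()
print(c.do_something(3))  # 6 -- logged in again automatically after the failure
```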
I am working with a class in Python that is part of a bigger program. The class calls several different methods.
If there is an error in one of the methods, I would like the code to keep running, but after the program has finished I want to be able to see which methods had potential errors in them.
Below is roughly how I am structuring it at the moment, and this solution doesn't scale very well with more methods. Is there a better way to provide feedback (after the code has been fully run) as to which of the method had a potential error?
class Class():
    def __init__(self):
        try:
            self.method_1()
        except:
            self.error_method1 = "Yes"
        try:
            self.method_2()
        except:
            self.error_method2 = "Yes"
        try:
            self.method_3()
        except:
            self.error_method3 = "Yes"
Although you could use sys.exc_info() to retrieve information about an exception when one occurs, as I mentioned in a comment, doing so may not be required since Python's standard try/except mechanism seems adequate.
Below is a runnable example showing how to do so in order to provide "feedback" later about the execution of several methods of a class. This approach uses a decorator function, so should scale well since the same decorator can be applied to as many of the class' methods as desired.
from contextlib import contextmanager
from functools import wraps
import sys
from textwrap import indent

def provide_feedback(method):
    """ Decorator to trap exceptions and add messages to feedback. """
    @wraps(method)
    def wrapped_method(self, *args, **kwargs):
        try:
            return method(self, *args, **kwargs)
        except Exception as exc:
            self._feedback.append(
                '{!r} exception occurred in {}()'.format(exc, method.__qualname__))
    return wrapped_method

class Class():
    def __init__(self):
        with self.feedback():
            self.method_1()
            self.method_2()
            self.method_3()

    @contextmanager
    def feedback(self):
        self._feedback = []
        try:
            yield
        finally:
            # Example of what could be done with any exception messages.
            # They could instead be appended to some higher-level container.
            if self._feedback:
                print('Feedback:')
                print(indent('\n'.join(self._feedback), '  '))

    @provide_feedback
    def method_1(self):
        raise RuntimeError('bogus')

    @provide_feedback
    def method_2(self):
        pass

    @provide_feedback
    def method_3(self):
        raise StopIteration('Not enough foobar to go around')

inst = Class()
Output:
Feedback:
  RuntimeError('bogus') exception occurred in Class.method_1()
  StopIteration('Not enough foobar to go around') exception occurred in Class.method_3()
As we know, in threading we have the concept of thread safety. When I use Tornado coroutines, I don't know whether the self of the RequestHandler is coroutine-safe or not.
Here is my code:
class IndexHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        self.write("Kingsoft API.")
        self.abc = 2
        yield self.gener()
        self.write(self.k)
        print self.k
        self.write("Kingsoft API.")
        return

    @tornado.gen.coroutine
    def gener(self):
        http_client = AsyncHTTPClient()
        self.k = str(int(time.time()*100000))
        response = yield http_client.fetch('http://127.0.0.1:8000/')
Another question: will my code work as expected?
A third question: I can only use self to pass parameters and return values, but that's ugly.
If I want to use AsyncHTTPClient inside some function, but not in a callback style, are there other nice ways to do it?
Your code is in a "critical section" between "yield" statements -- you cannot be interrupted unless you execute "yield". So you don't need to worry about accessing "self" or any other value in between yields.
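The same single-threaded guarantee holds for modern asyncio coroutines, which makes it easy to demonstrate with the stdlib alone (the counter and bump names are illustrative): a read-modify-write with no await in the middle cannot be interleaved with other coroutines, so no lock is needed.

```python
import asyncio

counter = 0

async def bump():
    global counter
    value = counter          # read...
    value += 1               # ...modify...
    counter = value          # ...write: no await in between, no interleaving
    await asyncio.sleep(0)   # only HERE can another coroutine run

async def main():
    await asyncio.gather(*[bump() for _ in range(100)])

asyncio.run(main())
print(counter)  # 100 -- no lost updates despite no locking
```

With OS threads the same read-modify-write pattern would be a classic race; between yield/await points, coroutines get it for free.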
Parameter passing works normally with coroutines, but to return a value (in Python 2) raise gen.Return:
class IndexHandler(tornado.web.RequestHandler):
    @tornado.gen.coroutine
    def get(self):
        self.write("Kingsoft API.")
        k = yield self.fn(2)
        self.write(k)

    @tornado.gen.coroutine
    def fn(self, arg):
        k = 2 * arg
        raise tornado.gen.Return(k)
In Python 3.3+ a simple "return k" also works.
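For comparison, here is the same call-and-return shape with modern native coroutines, stdlib only (the handler logic is a hypothetical reduction, not Tornado's actual API):

```python
import asyncio

async def fn(arg):
    return 2 * arg        # plain return works in native coroutines

async def get():
    k = await fn(2)       # await plays the role yield played above
    return k

result = asyncio.run(get())
print(result)  # 4
```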
So I've got some code I want to test, and I'm encountering what looks like a pretty horrible side effect of the yield-based, generator nature of the tests that @tornado.testing.gen_test expects:
class GameTest(tornado.testing.AsyncHTTPTestCase):
    def new_game(self):
        ws = yield websocket_connect('address')
        ws.write_message('new_game')
        response = yield ws.read_message()
        # I want to say:
        # return response

    @tornado.testing.gen_test
    def test_new_game(self):
        response = self.new_game()
        # do some testing
The problem is that I can't return a value from a generator, so my natural instinct here is wrong. Furthermore, I can't do this:
class GameTest(tornado.testing.AsyncHTTPTestCase):
    def new_game(self):
        ws = yield websocket_connect('address')
        ws.write_message('new_game')
        response = yield ws.read_message()
        yield response, True

    @tornado.testing.gen_test
    def test_new_game(self):
        for i in self.new_game():
            if isinstance(i, tuple):
                response, success = i
                break
        # do some testing
Because then I encounter the error:
AttributeError: 'NoneType' object has no attribute 'write_message'
Obviously, I can include the entire test generation code in the test, but that's really ugly, hard to maintain, etc. Does this testing pattern really make indirection so difficult?
You should use @gen.coroutine on asynchronous functions to be called by @gen_test methods, just like in non-test code. @gen_test is an adapter for your top-level test function that makes it possible to use asynchronous code in the synchronous unittest interface.
@gen.coroutine
def new_game(self):
    ws = yield websocket_connect('address')
    ws.write_message('new_game')
    response = yield ws.read_message()
    raise gen.Return(response)

@tornado.testing.gen_test
def test_new_game(self):
    response = yield self.new_game()
    # do some testing
In Python 3.3+, you can use return response instead of raise gen.Return(response). You can even omit the @gen.coroutine if you use yield from at the call site.
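The stdlib's modern analogue of this adapter pattern is unittest.IsolatedAsyncioTestCase (Python 3.8+), which plays the role gen_test plays here: it runs async test methods from the synchronous unittest interface. A minimal sketch, where new_game is a hypothetical stand-in for the websocket round-trip:

```python
import asyncio
import unittest

async def new_game():
    await asyncio.sleep(0)      # stand-in for the websocket round-trip
    return "new_game_ok"

class GameTest(unittest.IsolatedAsyncioTestCase):
    async def test_new_game(self):
        response = await new_game()   # helpers are awaited, as in the answer
        self.assertEqual(response, "new_game_ok")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(GameTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```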
I'm looking to encapsulate logic for database transactions into a with block; wrapping the code in a transaction and handling various exceptions (locking issues). This is simple enough, however I'd like to also have the block encapsulate the retrying of the code block following certain exceptions. I can't see a way to package this up neatly into the context manager.
Is it possible to repeat the code within a with statement?
I'd like to use it as simply as this, which is really neat.
def do_work():
    ...
    # This is ideal!
    with transaction(retries=3):
        # Atomic DB statements
        ...
    ...
I'm currently handling this with a decorator, but I'd prefer to offer the context manager (or in fact both), so I can choose to wrap a few lines of code in the with block instead of an inline function wrapped in a decorator, which is what I do at the moment:
def do_work():
    ...
    # This is not ideal!
    @transaction(retries=3)
    def _perform_in_transaction():
        # Atomic DB statements
        ...
    _perform_in_transaction()
    ...
Is it possible to repeat the code within a with statement?
No.
As pointed out earlier in that mailing list thread, you can reduce a bit of duplication by making the decorator call the passed function:
def do_work():
    ...
    # This is not ideal!
    @transaction(retries=3)
    def _perform_in_transaction():
        # Atomic DB statements
        ...
    # called implicitly
    ...
The way that occurs to me to do this is just to implement a standard database transaction context manager, but allow it to take a retries argument in the constructor. Then I'd just wrap that up in your method implementations. Something like this:
class transaction(object):
    def __init__(self, retries=0):
        self.retries = retries

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, traceback):
        pass

    # Implementation...
    def execute(self, query):
        err = None
        for _ in range(self.retries):
            try:
                return self._cursor.execute(query)
            except Exception as e:
                err = e  # probably ought to save all errors, but hey
        raise err

with transaction(retries=3) as cursor:
    cursor.execute('BLAH')
As decorators are just functions themselves, you could do the following:

with transaction(_perform_in_transaction, retries=3) as _perf:
    _perf()
For the details, you'd need to implement transaction() as a factory method that returns an object with __call__() set to call the original method and repeat it up to retries number of times on failure; __enter__() and __exit__() would be defined as normal for database transaction context managers.
You could alternatively set up transaction() such that it itself executes the passed method up to retries number of times, which would probably require about the same amount of work as implementing the context manager but would mean actual usage would be reduced to just transaction(_perform_in_transaction, retries=3) (which is, in fact, equivalent to the decorator example delnan provided).
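A stdlib-only sketch of a related middle ground (my own assumption, not from either answer above; the attempts and flaky names are illustrative): drive retries with a for loop over per-attempt context managers, so the retried statements stay inline rather than being factored into a function.

```python
from contextlib import contextmanager

def attempts(retries):
    """Yield one context manager per attempt; stop early after a clean exit."""
    state = {"done": False, "error": None}

    @contextmanager
    def attempt():
        try:
            yield
            state["done"] = True
        except Exception as e:
            state["error"] = e   # swallow; the loop decides whether to retry

    for _ in range(retries):
        if state["done"]:
            return
        yield attempt()
    if not state["done"] and state["error"] is not None:
        raise state["error"]     # out of retries: surface the last failure

calls = []

def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise ValueError("transient failure")

for ctx in attempts(3):
    with ctx:
        flaky()                  # fails twice, succeeds on the third try

print(len(calls))  # 3
```

Like the next answer's approach, this still needs an explicit loop at the call site; the with block alone cannot re-run its body.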
While I agree it can't be done with a context manager... it can be done with two context managers!
The result is a little awkward, and I am not sure whether I approve of my own code yet, but this is what it looks like as the client:
with RetryManager(retries=3) as rm:
    while rm:
        with rm.protect:
            print("Attempt #%d of %d" % (rm.attempt_count, rm.max_retries))
            # Atomic DB statements
There is an explicit while loop still, and not one, but two, with statements, which leaves a little too much opportunity for mistakes for my liking.
Here's the code:
class RetryManager(object):
    """ Context manager that counts attempts to run statements without
    exceptions being raised.
    - returns True when there should be more attempts
    """

    class _RetryProtector(object):
        """ Context manager that only raises exceptions if its parent
        RetryManager has given up."""
        def __init__(self, retry_manager):
            self._retry_manager = retry_manager

        def __enter__(self):
            self._retry_manager._note_try()
            return self

        def __exit__(self, exc_type, exc_val, traceback):
            if exc_type is None:
                self._retry_manager._note_success()
            else:
                # This would be a good place to implement sleep between
                # retries.
                pass
            # Suppress exception if the retry manager is still alive.
            return self._retry_manager.is_still_trying()

    def __init__(self, retries=1):
        self.max_retries = retries
        self.attempt_count = 0  # Note: 1-based.
        self._success = False
        self.protect = RetryManager._RetryProtector(self)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, traceback):
        pass

    def _note_try(self):
        self.attempt_count += 1

    def _note_success(self):
        self._success = True

    def is_still_trying(self):
        return not self._success and self.attempt_count < self.max_retries

    def __bool__(self):
        return self.is_still_trying()
Bonus: I know you don't want to separate your work off into separate functions wrapped with decorators... but if you were happy with that, the redo package from Mozilla offers decorators to do that, so you don't have to roll your own. There is even a context manager that effectively acts as a temporary decorator for your function, but it still relies on your retriable code being factored out into a single function.
This question is a few years old but after reading the answers I decided to give this a shot.
This solution requires the use of a "helper" class, but I think it does provide an interface with retries configured through a context manager.
class Client:
    def _request(self):
        # do request stuff
        print("tried")
        raise Exception()

    def request(self):
        retry = getattr(self, "_retry", None)
        if not retry:
            return self._request()
        else:
            for n in range(retry.tries):
                try:
                    return self._request()
                except Exception:
                    retry.attempts += 1

class Retry:
    def __init__(self, client, tries=1):
        self.client = client
        self.tries = tries
        self.attempts = 0

    def __enter__(self):
        self.client._retry = self

    def __exit__(self, *exc):
        print(f"Tried {self.attempts} times")
        del self.client._retry
>>> client = Client()
>>> with Retry(client, tries=3):
... # will try 3 times
... response = client.request()
tried
tried
tried
Tried 3 times