I have an application that manages module calls asynchronously:
it requests a deferred that triggers itself,
appends custom callbacks,
and checks the returned code to see if it equals CONTINUE, otherwise it handles errors.
This is the code that returns a deferred to the main application:
def xxfi_connect(self, hostname):
    d = defer.Deferred()
    d.callback(Milter.ReturnCodes.CONTINUE)
    return d
To asynchronously append some code, I need to hook up my function call in the deferred function like this:
d = defer.Deferred()
d.addCallback(self.run_mods, application.L_CONNECT)
d.callback(Milter.ReturnCodes.CONTINUE)
The trouble is that every function hooked up receives an argument containing the callback parameter (application.L_CONNECT).
Is it possible to achieve this without passing the return code through every function call?
Ideally, I'd like my callback function to be like this:
def run_mods(self, level):
    pass
instead of
def run_mods(self, code, level):
    pass
because the code (Milter.ReturnCodes.CONTINUE) is only needed at the end of the chain
Distinguishing successful or erred Deferred()s is already a feature built into them.
>>> from twisted.internet import defer
>>> d = defer.Deferred()
>>> def errors(*args): raise Exception("i'm a bad function")
>>> def sad(*arg): print "this is not so good", arg
>>> def happy(*arg): print "this is good", arg
>>> d.addCallback(errors)
<Deferred at 0x106821f38>
>>> d.addCallbacks(happy, sad)
<Deferred at 0x106821f38>
>>> d.callback("hope")
this is not so good (<twisted.python.failure.Failure <type 'exceptions.Exception'>>,)
Any given "stage" in a chain of callbacks can easily know whether it follows an error state: either by how it was added (as the argument to addErrback(), or the second argument to addCallbacks()), or by virtue of its argument being an instance of Failure().
For more about deferred chaining see: https://twistedmatrix.com/documents/14.0.1/core/howto/defer.html
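As for the original question of keeping the return code out of intermediate callbacks, one pattern (a sketch in plain Python, independent of Twisted; the names are illustrative, not from the question's code) is an adapter that hides the chained result from the hooked-up function but re-emits it unchanged, so only the end of the chain ever sees the code:

```python
def passthrough(fn, *args):
    """Wrap fn so it never sees the result flowing down the chain,
    and re-emit that result for the next callback."""
    def adapter(result):
        fn(*args)       # fn is called with only its own arguments
        return result   # the code keeps flowing to the end of the chain
    return adapter

# Stand-in for a Deferred chain: call each stage with the prior result.
def run_chain(initial, callbacks):
    value = initial
    for cb in callbacks:
        value = cb(value)
    return value

seen = []
chain = [passthrough(seen.append, "L_CONNECT")]
final = run_chain("CONTINUE", chain)
# seen == ["L_CONNECT"], final == "CONTINUE"
```

With Twisted, the same adapter would be passed to d.addCallback directly, so run_mods keeps the signature run_mods(self, level).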
I have been trying to use the Twisted framework for some async programming tasks recently. One thing I do not quite understand is how to wrap a function which takes a callback and turn it into a function that returns a deferred object.
For example if I have a function like below:
def registerCallbackForData(callback):
    pass  # this is a function that I do not control, some library code
And now the way I use it is to just register a callback. But I want to be able to incorporate this into the twisted framework, returning a deferred object probably and use reactor.run() later. Is this possible?
from twisted.internet.defer import Deferred

def convert_callback_to_deferred(f):
    def g():
        d = Deferred()
        f(d.callback)
        return d
    return g
from somelib import registerCallbackForData
getSomeDeferredForData = convert_callback_to_deferred(registerCallbackForData)
d = getSomeDeferredForData()
d.addCallback(...)
...
However, bear in mind that a Deferred can produce at most one result. If registerCallbackForData(cb) will result in cb being called more than once, there is nowhere for the 2nd and following calls' data to go. Only if you have an at-most-once event source does it make sense to convert it to the Deferred interface.
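A toy stand-in (not Twisted itself, just a sketch of the contract) makes that at-most-once constraint concrete: once a result has been delivered, a second delivery has nowhere to go and Twisted raises AlreadyCalledError (modeled here as a RuntimeError):

```python
class OneShot:
    """Toy model of a Deferred's at-most-once result delivery."""
    _UNSET = object()

    def __init__(self):
        self.result = self._UNSET
        self._callbacks = []

    def addCallback(self, fn):
        if self.result is not self._UNSET:
            fn(self.result)          # already fired: run immediately
        else:
            self._callbacks.append(fn)

    def callback(self, result):
        if self.result is not self._UNSET:
            # Twisted raises AlreadyCalledError here
            raise RuntimeError("result already delivered")
        self.result = result
        for fn in self._callbacks:
            fn(result)

shot = OneShot()
got = []
shot.addCallback(got.append)
shot.callback("first")
try:
    shot.callback("second")   # the 2nd call's data has nowhere to go
    second_ok = True
except RuntimeError:
    second_ok = False
# got == ["first"], second_ok is False
```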
defer.maybeDeferred will wrap a blocking function call to return a deferred. If you want the call to be non-blocking you can instead use threads.deferToThread.
You can see the difference by switching which call is commented out:
import time
from twisted.internet import reactor, defer, threads
def foo():
    print "going to sleep"
    time.sleep(1)
    print "woke up"
    return "a result"

def show_result_and_stop_reactor(result):
    print "got result: %s, stopping the reactor" % result
    reactor.stop()
print "making the deferred"
d = defer.maybeDeferred(foo)
# d = threads.deferToThread(foo)
print "adding the callback"
d.addCallback(show_result_and_stop_reactor)
print "running the reactor"
reactor.run()
I have some Tornado's coroutine related problem.
There is a Python model A, which has the ability to execute some function. The function can be set from outside of the model. I can't change the model itself, but I can pass any function I want. I'm trying to teach it to work with Tornado's ioloop through my function, but I couldn't get it to work.
Here is the snippet:
import functools
import pprint
from tornado import gen
from tornado import ioloop
class A:
    f = None

    def execute(self):
        return self.f()

@gen.coroutine
def genlist():
    raise gen.Return(range(1, 10))

@gen.coroutine
def some_work():
    a = A()
    a.f = functools.partial(
        ioloop.IOLoop.instance().run_sync,
        lambda: genlist())
    print "a.f set"
    raise gen.Return(a)

@gen.coroutine
def main():
    a = yield some_work()
    retval = a.execute()
    raise gen.Return(retval)

if __name__ == "__main__":
    pprint.pprint(ioloop.IOLoop.current().run_sync(main))
So the thing is that I set the function in one part of code, but execute it in the other part with the method of the model.
Now, Tornado 4.2.1 gives me "IOLoop is already running", but in Tornado 3.1.1 it works (though I don't know exactly how).
I know next things:
I can create a new ioloop, but I would like to use the existing ioloop.
I can wrap genlist with some function which knows that genlist's result is a Future, but I don't know how to block execution inside a synchronous function until the future's result is set.
Also, I can't make a.execute() return a future object, because a.execute() could be called from other parts of the code, i.e. it should return a list instance.
So, my question is: is there any opportunity to execute asynchronous genlist from the synchronous model's method using current IOLoop?
You cannot restart the outer IOLoop here. You have three options:
Use asynchronous interfaces everywhere: change a.execute() and everything up to the top of the stack into coroutines. This is the usual pattern for Tornado-based applications; trying to straddle the synchronous and asynchronous worlds is difficult and it's better to stay on one side or the other.
Use run_sync() on a temporary IOLoop. This is what Tornado's synchronous tornado.httpclient.HTTPClient does, which makes it safe to call from within another IOLoop. However, if you do it this way the outer IOLoop remains blocked, so you have gained nothing by making genlist asynchronous.
Run a.execute on a separate thread and call back to the main IOLoop's thread for the inner function. If a.execute cannot be made asynchronous, this is the only way to avoid blocking the IOLoop while it is running.
import concurrent.futures

import tornado.concurrent
from tornado import gen, ioloop

executor = concurrent.futures.ThreadPoolExecutor(8)

@gen.coroutine
def some_work():
    a = A()
    def adapter():
        # Convert the thread-unsafe tornado.concurrent.Future
        # to a thread-safe concurrent.futures.Future.
        # Note that everything including chain_future must happen
        # on the IOLoop thread.
        future = concurrent.futures.Future()
        ioloop.IOLoop.instance().add_callback(
            lambda: tornado.concurrent.chain_future(
                genlist(), future))
        return future.result()
    a.f = adapter
    print "a.f set"
    raise gen.Return(a)

@gen.coroutine
def main():
    a = yield some_work()
    retval = yield executor.submit(a.execute)
    raise gen.Return(retval)
Say, your function looks something like this:
@gen.coroutine
def foo():
    # does slow things

or

@concurrent.run_on_executor
def bar(i=1):
    # does slow things
You can run foo() like so:
from tornado.ioloop import IOLoop
loop = IOLoop.current()
loop.run_sync(foo)
You can run bar(..), or any coroutine that takes args, like so:
from functools import partial
from tornado.ioloop import IOLoop
loop = IOLoop.current()
f = partial(bar, i=100)
loop.run_sync(f)
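The functools.partial step is plain Python and can be checked without Tornado: it just freezes the keyword argument so the loop can call a zero-argument callable (run_sync below is a stand-in for IOLoop.run_sync, not Tornado's actual implementation):

```python
from functools import partial

def bar(i=1):
    # stands in for a slow coroutine; returns something checkable
    return i * 2

def run_sync(func):
    """Stand-in for IOLoop.run_sync: call the zero-argument callable."""
    return func()

f = partial(bar, i=100)   # f() now means bar(i=100)
result = run_sync(f)
# result == 200
```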
I am trying to use bottle.py to build some webpages. It seems like a major part of using bottle is learning to use decorators. I have read the Python docs' explanation of what decorators are, but I am still not sure I understand them.
The docs say:
"A Python decorator is a specific change to the Python syntax that allows us to more conveniently alter functions and methods (and possibly classes in a future version)."
It sounds like you are calling a function with some changes made but I am not sure why you would do it this way or how to read the decorator.
Looking at some bottle code:
if __name__ == '__main__':
    PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))
    STATIC_ROOT = os.path.join(PROJECT_ROOT, 'static').replace('\\', '/')
    HOST = os.environ.get('SERVER_HOST', 'localhost')
    try:
        PORT = int(os.environ.get('SERVER_PORT', '5555'))
    except ValueError:
        PORT = 5555

    @bottle.route('/static/<filepath:path>')
    def server_static(filepath):
        """Handler for static files, used with the development server.
        When running under a production server such as IIS or Apache,
        the server should be configured to serve the static files."""
        return bottle.static_file(filepath, root=STATIC_ROOT)

    # Starts a local test server.
    bottle.run(server='wsgiref', host=HOST, port=PORT)
What does this line do: @bottle.route('/static/<filepath:path>')?
If it's a fancy function call, then why do it this way rather than just calling the function?
Thanks for your help! :D
Check out this code:
def my_decorator(func):
    return lambda: print("goodbye")

def greet():
    print('hello')

result = my_decorator(greet)
result()
--output:--
goodbye
The following is a shortcut to accomplish the same thing:
def my_decorator(func):
    return lambda: print("goodbye")

@my_decorator
def greet():
    print('hello')

greet()
--output:--
goodbye
The @my_decorator syntax takes the function below it, greet, and makes this call:
greet = my_decorator(greet)
The my_decorator() function has to be defined so that:
It takes a function as an argument.
Returns a function.
A Python decorator is a specific change to the Python syntax that
allows us to more conveniently alter functions and methods (and
possibly classes in a future version).
Okay, so let's say that you want to add to whatever the greet() function does:
def my_decorator(func):     # func = greet
    def add_to_greet():
        func()              # <*********This is greet()
        print('world')      # <***This is additional stuff.
    return add_to_greet

@my_decorator
def greet():
    print('hello')

greet()
--output:--
hello
world
What does this line do: @bottle.route('/static/<filepath:path>')
Okay, are you ready? If the @some_name syntax specifies an argument, for instance:
@wrapper('world')
def do_stuff():
First python will execute the following call:
@wrapper('world')
def do_stuff():
    ...

#****HERE:
decorator = wrapper('world')  # decorator is a newly created variable
The wrapper() function must be defined to:
Take any old argument, e.g. 'world'
Return a function that:
Takes a function as an argument.
Returns a function.
Secondly, python will execute the call:
@wrapper('world')
def do_stuff():
    ...

decorator = wrapper('world')
#*****HERE:
do_stuff = decorator(do_stuff)
Whew! Here is an example:
def wrapper(extra_greeting):
    def my_decorator(func):
        def add_to_greet():
            func()
            print(extra_greeting)
        return add_to_greet
    return my_decorator

@wrapper('world')
def greet():
    print('hello')

greet()
--output:--
hello
world
Now, let's analyze this decorator:
@bottle.route('/static/<filepath:path>')
def server_static(filepath):
bottle -- a module
route -- a function(or other callable) defined in the bottle module
'/static/<filepath:path>' -- a route
So the bottle module might look like this:
# bottle.py
def route(your_route):  # your_route <= '/static/<filepath:path>'
    # The decorator syntax will cause python to call this
    # function with server_static as the argument
    def my_decorator(func):
        def do_stuff(filepath):
            # Call the server_static() function with the part
            # of the url that matched filepath
            func(filepath)
        # This function will be called when your code calls server_static(...)
        return do_stuff
    return my_decorator
If it's a fancy function call, then why do it this way rather than just
calling the function?
Advanced stuff.
Comment: Perhaps you forgot to explain what specifically that route decorator does?
@route('/hello')
def hello():
    return "Hello World!"
The route() decorator binds a piece of code to an URL path. In this
case, we link the /hello path to the hello() function. This is called
a route (hence the decorator name) and is the most important concept
of this framework. You can define as many routes as you want. Whenever
a browser requests a URL, the associated function is called and the
return value is sent back to the browser. It’s as simple as that.
http://bottlepy.org/docs/dev/tutorial.html
A path can include wild cards:
The simplest form of a wildcard consists of a name enclosed in angle
brackets (e.g. <name>)....Each wildcard matches one or more
characters, but stops at the first slash (/).
The rule /<action>/<item> matches as follows:
Path          Result
/save/123     {'action': 'save', 'item': '123'}
/save/123/    No Match
/save/        No Match
//123         No Match
Filters are used to define more specific wildcards, and/or transform
the matched part of the URL before it is passed to the callback. A
filtered wildcard is declared as <name:filter>
The following standard filters are implemented:
:path matches all characters including the slash character in a non-greedy way and may be used to match more than one path segment.
http://bottlepy.org/docs/dev/routing.html
A decorator is just a function wrapper: it takes some callable and surrounds it with more behavior. Technically, a decorator is a function that returns a function (or an object; in fact there are decorator classes).
Let's say, for example, you want to make a logger decorator; this logger decorator will execute something and log who is executing it:
def logger(name):
    def wrapper(f):
        def retF(*args, **kwargs):
            print name, "is executing"
            return f(*args, **kwargs)
        return retF
    return wrapper
So we have our decorator that will print "Daniel is executing" before calling our desired function, for example:
@logger("Daniel")
def add2Nums(a, b):
    return a + b

>>> add2Nums(1, 2)
Daniel is executing
3
Bottle works the same way; in
@bottle.route('/static/<filepath:path>')
it just wraps your server_static call, so whenever someone accesses that route your function is called.
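As a sketch of that idea (not bottle's actual implementation; the registry and dispatcher below are made up for illustration), a route decorator can be nothing more than a dict that maps paths to handlers, returning the handler unchanged:

```python
routes = {}

def route(path):
    """Toy version of bottle.route: register the handler under its path."""
    def decorator(func):
        routes[path] = func
        return func          # the handler itself is returned unmodified
    return decorator

@route('/hello')
def hello():
    return "Hello World!"

# A request dispatcher looks the handler up by path and calls it.
def dispatch(path):
    return routes[path]()
# dispatch('/hello') == "Hello World!"
```

This is why the decorator form is used: the act of defining the function is also the act of registering it with the framework, with no separate registration call to forget.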
I have a scenario where I'm dynamically running functions at run-time and need to keep track of a "localized" scope. In the example below, "startScope" and "endScope" would actually be creating levels of "nesting" (in reality, the stuff contained in this localized scope isn't print statements...it's function calls that send data elsewhere and the nesting is tracked there. startScope / endScope just set control flags that are used to start / end the current nesting depth).
This all works fine for tracking the nested data, however, exceptions are another matter. Ideally, an exception would result in "falling out" of the current localized scope and not end the entire function (myFunction in the example below).
def startScope():
    # Increment our control object's (not included in this example) nesting depth
    control.incrementNestingDepth()

def endScope():
    # Decrement our control object's (not included in this example) nesting depth
    control.decrementNestingDepth()

def myFunction():
    print "A"
    print "B"
    startScope()
    print "C"
    raise Exception
    print "D"
    print "This print statement and the previous one won't get printed"
    endScope()
    print "E"

def main():
    try:
        myFunction()
    except:
        print "Error!"
Running this would (theoretically) output the following:
>>> main()
A
B
C
Error!
E
>>>
I'm quite certain this isn't possible as I've written it above - I just wanted to paint a picture of the sort of end-result I'm trying to achieve.
Is something like this possible in Python?
Edit: A more relevant (albeit lengthy) example of how this is actually being used:
class Log(object):
    """
    Log class
    """
    def __init__(self):
        # DataModel is defined elsewhere and contains a bunch of data structures / handles nested data / etc...
        self.model = DataModel()

    def Warning(self, text):
        self.model.put("warning", text)

    def ToDo(self, text):
        self.model.put("todo", text)

    def Info(self, text):
        self.model.put("info", text)

    def StartAdvanced(self):
        self.model.put("startadvanced")

    def EndAdvanced(self):
        self.model.put("endadvanced")

    def AddDataPoint(self, data):
        self.model.put("data", data)

    def StartTest(self):
        self.model.put("starttest")

    def EndTest(self):
        self.model.put("endtest")

    def Error(self, text):
        self.model.put("error", text)
# myScript.py
from Logger import Log

def test_alpha():
    """
    Crazy contrived example
    In this example, there are 2 levels of nesting...everything up to StartAdvanced(),
    and after EndAdvanced() is included in the top level...everything between the two is
    contained in a separate level.
    """
    Log.Warning("Better be careful here!")
    Log.AddDataPoint(fancyMath()[0])
    data = getSerialData()
    if data:
        Log.Info("Got data, let's continue with an advanced test...")
        Log.StartAdvanced()
        # NOTE: If something breaks in one of the following methods, then GOTO (***)
        operateOnData(data)
        doSomethingCrazy(data)
        Log.ToDo("Fill in some more stuff here later...")
        Log.AddDataPoint(data)
        Log.EndAdvanced()
    # (***) Ideally, we would resume here if an exception is raised in the above localized scope
    Log.Info("All done! Log some data and wrap everything up!")
    Log.AddDataPoint({"data": "blah"})
# Done
# framework.py
import inspect
from Logger import Log

class Framework(object):
    def __init__(self):
        print "Framework init!"
        self.tests = []

    def loadTests(self, file):
        """
        Simplifying this for the sake of clarity
        """
        for test in file:
            self.tests.append(test)

    def runTests(self):
        """
        Simplifying this for the sake of clarity
        """
        # test_alpha() as well as any other user tests will be run here
        for test in self.tests:
            Log.StartTest()
            try:
                test()
            except Exception, e:
                Log.Error(str(e))
            Log.EndTest()
# End
You can achieve a similar effect with a context manager using a with statement. Here I use the contextlib.contextmanager decorator:
import contextlib

@contextlib.contextmanager
def swallower():
    try:
        yield
    except ZeroDivisionError:
        print("We stopped zero division error")

def foo():
    print("This error will be trapped")
    with swallower():
        print("Here comes error")
        1/0
        print("This will never be reached")
    print("Merrily on our way")
    with swallower():
        print("This error will propagate")
        nonexistentName
        print("This won't be reached")
>>> foo()
This error will be trapped
Here comes error
We stopped zero division error
Merrily on our way
This error will propagate
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
foo()
File "<pyshell#3>", line 10, in foo
nonexistentName
NameError: global name 'nonexistentName' is not defined
It cannot be done with an ordinary function call as in your example. In your example, the function startScope returns before the rest of the body of myFunction executes, so startScope can't have any effect on it. To handle exceptions, you need some kind of explicit structure (either a with statement or a regular try/except) inside myFunction; there's no way to make a simple function call magically intercept exceptions that are raised in its caller.
You should read up on context managers as they seem to fit what you're trying to do. The __enter__ and __exit__ methods of the context manager would correspond to your startScope and endScope. Whether it will do exactly what you want depends on exactly what you want those "manager" functions to do, but you will probably have more luck doing it with a context manager than trying to do it with simple function calls.
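A sketch of how startScope/endScope could become such a context manager (the names follow the question; the control object is replaced by a simple depth counter for illustration): returning True from __exit__ swallows the exception, so execution "falls out" of the localized scope and resumes after the with block instead of ending the whole function:

```python
class Scope:
    """Toy scope tracker: __enter__/__exit__ play the role of
    startScope()/endScope(); returning True from __exit__ swallows
    any exception raised inside the block."""
    def __init__(self):
        self.depth = 0
        self.errors = []

    def __enter__(self):
        self.depth += 1       # startScope equivalent
        return self

    def __exit__(self, exc_type, exc, tb):
        self.depth -= 1       # endScope equivalent, runs even on error
        if exc_type is not None:
            self.errors.append(str(exc))
        return True           # swallow: fall out of the scope, not the function

scope = Scope()
log = []

def myFunction():
    log.append("A")
    with scope:
        log.append("C")
        raise Exception("boom")
        log.append("D")       # never reached
    log.append("E")           # execution resumes here after the error

myFunction()
# log == ["A", "C", "E"]; scope.errors == ["boom"]; scope.depth == 0
```

Note that __exit__ runs even when an exception is raised, so the nesting depth is decremented reliably, which the plain endScope() call in the question could not guarantee.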
I am currently reading through the tutorial doc http://www.dabeaz.com/coroutines/Coroutines.pdf and got stuck at the (pure coroutine) multitask part, in particular the System Call section.
The part that got me confused is
from Queue import Queue

class Task(object):
    taskid = 0
    def __init__(self, target):
        Task.taskid += 1
        self.tid = Task.taskid     # Task ID
        self.target = target       # Target coroutine
        self.sendval = None        # Value to send
    def run(self):
        return self.target.send(self.sendval)

def foo():
    mytid = yield GetTid()
    for i in xrange(5):
        print "I'm foo", mytid
        yield

class SystemCall(object):
    def handle(self):
        pass

class Scheduler(object):
    def __init__(self):
        self.ready = Queue()
        self.taskmap = {}
    def new(self, target):
        newtask = Task(target)
        self.taskmap[newtask.tid] = newtask
        self.schedule(newtask)
        return newtask.tid
    def schedule(self, task):
        self.ready.put(task)
    def mainloop(self):
        while self.taskmap:
            task = self.ready.get()
            try:
                result = task.run()
                if isinstance(result, SystemCall):
                    result.task = task
                    result.sched = self
                    result.handle()
                    continue
            except StopIteration:
                self.exit(task)
                continue
            self.schedule(task)
And the actual calling
sched = Scheduler()
sched.new(foo())
sched.mainloop()
The part I don't understand is how the tid gets assigned to mytid in foo(). The order of things seems to be as follows (starting from sched.mainloop()). Please correct me if I get the flow wrong.
Assumptions: let's name some of the things
the_coroutine = foo()
scheduler = Scheduler()
the_task = scheduler.new(the_coroutine)  # assume .new() returns the task instead of the tid
scheduler: .mainloop() is called
scheduler: the_task.run() is called
the_task: the_coroutine.send(None) is called
the_corou: yields GetTid(), which returns an instance of GetTid to the scheduler before the None from step 3 is sent to the yield statement in the loop. (am I right?)
the_corou: (also at the same time?) mytid is assigned the instance of GetTid()?
scheduler: result = the_task.run()  # result is <an instance of GetTid>
scheduler: result is indeed an instance of SystemCall
scheduler: result.handle() is called
GetTid instance: scheduler.schedule(the_task)
scheduler: current iteration is done, start a new one
scheduler: (assume no other task in queue) the_task.run() is called
scheduler: the_coroutine.send() is called
I am lost?
When it reaches step 12, apparently the loop has already started and is able to print the tid whenever the scheduler runs the respective task. However, when exactly was the value of tid assigned to mytid in foo()? I am sure I missed something in the flow, but where (or am I completely wrong)?
Then I noticed the part where the Task object calls .send(): it returns a value, so .send() returns a value?
You seem to have omitted the Scheduler's new method, which is almost certainly where the assignment occurs. I imagine the result of GetTid() that is yielded is immediately sent back into the coroutine using .send().
Regarding .send(), yes you're correct, .send() returns a value.
Try the example below to see what's going on. .send assigns a value to the variable on the left side of the = yield, and execution of code within the coroutine resumes until it hits the next yield. At that point, the coroutine yields, and whatever it yields is the return value of the .send.
>>> def C():
... x = yield
... yield x**2
... print 'no more yields left'
...
>>> cor = C()
>>> cor.next()
>>> yielded = cor.send(10)
>>> print yielded
100
>>> cor.next()
no more yields left
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
Moving some of the in-comment points into the answer
So, how does x = yield y work?
When the coroutine hits that line of code, it yields the value y, then halts execution, waiting for someone to invoke its .send() method with an argument.
When someone does invoke .send(), whatever argument was passed to .send is assigned to the variable x, and the coroutine executes code from there up to the point of its next yield statement.
Edit: Whoa boy it gets even more complex... I skimmed through this David Beazley presentation before, but to be honest I'm better acquainted with his other two talks on generators...
Going through the linked material, it looks like this definition of GetTid is what we're after.
class GetTid(SystemCall):
    def handle(self):
        self.task.sendval = self.task.tid
        self.sched.schedule(self.task)
I quote from his presentation: "The operation of this is little subtle". haha.
Now look at the mainloop:
if isinstance(result, SystemCall):
    result.task = task
    result.sched = self
    result.handle()  # <- This bit!
    continue
result here is the GetTid object, which runs its handle method; that sets its task's sendval attribute to the task's tid, and then schedules the task by putting it back in the queue.
Once the task is retrieved from the queue, the task.run() method is run again. Let's look at the Task object definition:
class Task(object):
    ...
    def run(self):
        return self.target.send(self.sendval)
When task.run() is invoked for this second time, it will send its sendval value (which was previously set by result.handle() to its tid) to its .target - the foo coroutine. This is where the foo coroutine object finally receives the value of its mytid.
The foo coroutine object runs until its next yield, printing its message along the way, and returns None (because there's nothing to the right of yield). That None is the return value of the task.run() method.
That is NOT an instance of a SystemCall, so the task is not handled/scheduled upon the second pass.
Other evil things probably happen too but that's the flow that I'm seeing for now.
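A stripped-down, runnable rendition of that round trip (a sketch, not Beazley's exact code: Python 3, the Queue replaced by a deque, Scheduler.exit collapsed into a del, and prints replaced by a list so the flow is checkable) makes the two .send() calls visible:

```python
from collections import deque

class SystemCall:
    pass

class GetTid(SystemCall):
    def handle(self):
        self.task.sendval = self.task.tid   # tid delivered on the NEXT send
        self.sched.schedule(self.task)

class Task:
    taskid = 0
    def __init__(self, target):
        Task.taskid += 1
        self.tid = Task.taskid
        self.target = target
        self.sendval = None
    def run(self):
        return self.target.send(self.sendval)

class Scheduler:
    def __init__(self):
        self.ready = deque()
        self.taskmap = {}
    def new(self, target):
        task = Task(target)
        self.taskmap[task.tid] = task
        self.schedule(task)
        return task.tid
    def schedule(self, task):
        self.ready.append(task)
    def mainloop(self):
        while self.taskmap:
            task = self.ready.popleft()
            try:
                result = task.run()          # 1st run: send(None), get GetTid back
                if isinstance(result, SystemCall):
                    result.task = task
                    result.sched = self
                    result.handle()          # sets task.sendval = tid, reschedules
                    continue
            except StopIteration:
                del self.taskmap[task.tid]   # stands in for self.exit(task)
                continue
            self.schedule(task)

seen = []

def foo():
    mytid = yield GetTid()   # 2nd run: .send(tid) lands here, into mytid
    for _ in range(2):
        seen.append(mytid)
        yield

sched = Scheduler()
sched.new(foo())
sched.mainloop()
# seen == [1, 1]: foo printed (here, recorded) its tid twice
```

The key line is task.run() on the second pass: self.sendval is now the tid, so .send(tid) resumes the coroutine at the yield and binds mytid.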