I come from a Twisted background, so I have a solid understanding of protocols and factories, as implemented by Twisted. However, I am in the midst of switching over to asyncio, and I'm having a bit of trouble understanding how factories integrate into this particular framework.
In the official documentation, we have an example of a server's asyncio.Protocol class definition. It does not have a user-defined __init__ function, so we can simply call loop.create_server(EchoServerClientProtocol, addr, port).
What happens if our Protocol needs to implement some initialization logic? For instance, consider this example which sets a maximum buffer size:
import asyncio
from collections import deque

class BufferedProtocolExample(asyncio.Protocol):
    def __init__(self, buffsize=None):
        self.queue = deque((), buffsize)
        # ...
In Twisted, you'd create a Factory class to hold all of the configuration values, which you would then pass to the function that initiates the connection. asyncio seems to work in the same way, but I cannot find any documentation for it.
I could use functools.partial, but what is the correct way of handling this case?
The documentation has an example that uses a lambda for this, so my guess is that functools.partial is fine too. It also states that protocol_factory can be any callable. So, to get something like Twisted's Factory classes, you just need to implement __call__ on a class the way you'd implement buildProtocol in Twisted.
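For illustration, here is a minimal sketch of that idea; the factory class and its names are my own, not part of asyncio. The configuration lives on the factory, and __call__ plays the role Twisted's buildProtocol does:

```python
import asyncio
from collections import deque

class BufferedProtocolExample(asyncio.Protocol):
    def __init__(self, buffsize=None):
        # Bounded buffer: deque drops old items once maxlen is reached.
        self.queue = deque((), buffsize)

class BufferedProtocolFactory:
    """Twisted-style factory: holds configuration for each new protocol."""
    def __init__(self, buffsize=None):
        self.buffsize = buffsize

    def __call__(self):
        # create_server invokes protocol_factory with no arguments,
        # so we bake the configuration in here.
        return BufferedProtocolExample(self.buffsize)
```

You could then call loop.create_server(BufferedProtocolFactory(buffsize=10), addr, port); functools.partial(BufferedProtocolExample, buffsize=10) would work just as well for simple cases, with the class form paying off once the factory needs state of its own.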
I've seen some example code for PySide slots that uses the @QtCore.Slot decorator, and some that does not. Testing it myself, it doesn't seem to make a difference. Is there a reason I should or should not use it? For example, in the following code:
import sys
from PySide import QtCore

# the next line seems to make no difference
@QtCore.Slot()
def a_slot(s):
    print s

class SomeClass(QtCore.QObject):
    happened = QtCore.Signal(str)

    def __init__(self):
        QtCore.QObject.__init__(self)

    def do_signal(self):
        self.happened.emit("Hi.")

sc = SomeClass()
sc.happened.connect(a_slot)
sc.do_signal()
the @QtCore.Slot decorator makes no difference; I can omit it, call @QtCore.Slot(str), or even @QtCore.Slot(int), and it still nicely says, "Hi."
The same seems to be true for PyQt's pyqtSlot.
This link explains the following about the pyqtSlot decorator:
Although PyQt4 allows any Python callable to be used as a slot when connecting signals, it is sometimes necessary to explicitly mark a Python method as being a Qt slot and to provide a C++ signature for it. PyQt4 provides the pyqtSlot() function decorator to do this.
and
Connecting a signal to a decorated Python method also has the advantage of reducing the amount of memory used and is slightly faster.
Since the pyqtSlot decorator can take additional arguments such as name, it allows different Python methods to handle different signatures of the same signal.
If you don't use the slot decorator, the signal connection mechanism has to manually work out all the type conversions to map from the underlying C++ function signatures to the Python functions. When the slot decorators are used, the type mapping can be explicit.
Austin has a good answer, and the answer I'm about to write is a bit outside the scope of your question, but it's something that has been confusing me and I imagine others will end up on this page wondering the same thing.
If you want to expose Python methods to JavaScript (using QtWebKit), then the @pyqtSlot decorator is mandatory; undecorated methods are not exposed to JavaScript.
In a multithreaded environment, it may be mandatory not to use the PySide Slot decorator, because it can cause signals to be delivered to the wrong thread. See:
Derived classes receiving signals in wrong thread in PySide (Qt/PyQt)
https://bugreports.qt.io/browse/PYSIDE-249
I've recently been learning Twisted so I can integrate the framework into a Pygame script. I've found that a lot of examples and tutorials override existing methods in Twisted (please correct me if I'm mistaken).
In this simple client, I override the twisted.protocols.basic.LineReceiver.lineReceived method, which is called whenever a line is sent to the client:
from twisted.internet import reactor
from twisted.internet.protocol import ClientFactory
from twisted.protocols.basic import LineReceiver

class ChatClientProtocol(LineReceiver):
    def lineReceived(self, line):
        print(line)

class ChatClient(ClientFactory):
    def __init__(self):
        self.protocol = ChatClientProtocol

reactor.connectTCP('192.168.1.2', 6000, ChatClient())
reactor.run()
Is LineReceiver.lineReceived a listening socket at the address passed to reactor.connectTCP? Would there be a way to do this without overriding the method? Or is this the paradigm of Twisted (overriding is the way to use it)?
LineReceiver.lineReceived is a method that gets called when a line is received. I don't know what you mean by asking if it's a "listening socket".
Overriding is the way that you receive lines using LineReceiver. Generally speaking, overriding or implementing callbacks for specific notifications is how you get called in Twisted, yes. How else would you want to do it?
In python multiprocessing module, in order to obtain an object from a remote Manager, most recipes tell us that we need to build a getter to recover each object:
import multiprocessing.managers
from multiprocessing import Queue
from threading import Thread

class QueueMgr(multiprocessing.managers.SyncManager):
    pass

datos = Queue()
resultados = Queue()
topList = list(top)

QueueMgr.register('get_datos', callable=lambda: datos)
QueueMgr.register('get_resultados', callable=lambda: resultados)
QueueMgr.register('get_top', callable=lambda: topList)

def Cola_run():
    queueMgr = QueueMgr(address=('172.2.0.1', 25555), authkey="foo")
    queueMgr.get_server().serve_forever()

Cola = Thread(target=Cola_run)
Cola.daemon = True
Cola.start()
and then the same getters must be declared in the client program:
import multiprocessing.managers

class QueueMgr(multiprocessing.managers.SyncManager):
    pass

QueueMgr.register('get_datos')
QueueMgr.register('get_resultados')
QueueMgr.register('get_top')

queueMgr = QueueMgr(address=('172.22.0.4', 25555), authkey="foo")
queueMgr.connect()

datos = queueMgr.get_datos()
resultados = queueMgr.get_resultados()
top = queueMgr.get_top()._getvalue()
OK, this covers most use cases. But I find the code ugly. Perhaps I'm not using the right recipe. And if this really is the way, then I could at least write nicer client code, perhaps declaring the getters automagically, if I knew in advance which objects the Manager is sharing. Is there a way to do that?
It is particularly troubling when you consider that the SyncManager instances returned by multiprocessing.Manager() can create sophisticated Proxy objects, yet any client connecting to such a SyncManager apparently has to obtain the references to those proxies from elsewhere.
There's nothing stopping you from introspecting into the class and, for each shared attribute, generating the getter and calling register.
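As a sketch of that idea, assuming both sides agree on the names of the shared objects (the names and the ServerQueueMgr/QueueMgr split below are illustrative, not a multiprocessing convention), the getters can be generated in a loop instead of written out by hand:

```python
import multiprocessing.managers

# Hypothetical: the shared-object names both server and client know about.
SHARED_NAMES = ('datos', 'resultados', 'top')

# Client side: generate the bare getters instead of listing them manually.
class QueueMgr(multiprocessing.managers.SyncManager):
    pass

for name in SHARED_NAMES:
    QueueMgr.register('get_' + name)

# Server side: the same loop, binding each getter to its object.
# The default-argument trick (obj=obj) pins the object per iteration,
# avoiding the late-binding pitfall of closures created in a loop.
shared = {name: [] for name in SHARED_NAMES}

class ServerQueueMgr(multiprocessing.managers.SyncManager):
    pass

for name, obj in shared.items():
    ServerQueueMgr.register('get_' + name, callable=lambda obj=obj: obj)
```

The remaining wart is distributing SHARED_NAMES itself; one option is to register a single well-known getter (say, get_names) on the server and have the client fetch the list first, then register the rest.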
How can I handle asyncore.dispatcher(s) and SimpleXMLRPCServer events from the same event loop?
P.S. I already know that some of you might recommend Twisted for this, but the problem with Twisted is that it is a bit too high-level a library for my needs. In particular, I am doing UDP flow control by overriding the asyncore.dispatcher.writable() method, which depends on timers. Not sure if/how this would be doable in Twisted.
You should use Twisted for this :-). You can't put SimpleXMLRPCServer into an asynchronous loop; it's synchronous code, which expects to block.
Flow control with Twisted, even with UDP, is easy. Rather than overriding a method like writable(), your DatagramProtocol can call methods like stopReading / stopWriting / startReading / startWriting on its transport attribute. You can see these methods here.
(Note: this question is strictly about the design of the API, not about how to implement it; i.e. I only care about what the client of my API sees here, not what I have to do to make it work.)
In simple terms: I want to know the established pattern - if any - for explicit futures (aka promises, aka deferreds, aka tasks - names vary depending on the framework) in Python. Following is a more detailed description.
Consider a simple Python API like this:
def read_line():
    ...

s = read_line()
print(s)
This is the synchronous version - it will block if a line is not available yet. Suppose, now, that I want to provide a corresponding asynchronous (non-blocking) version that allows registering a callback to be invoked once the operation completes. E.g. a simple version could look like this:
def read_line_async(callback):
    ...

read_line_async(lambda s: print(s))
Now, in other languages and frameworks, there are often existing mandated or at least well-established patterns for such APIs. For example, in .NET prior to version 4, one would typically provide a pair of BeginReadLine/EndReadLine methods, and use the stock IAsyncResult interface to register callbacks and pass the resulting values. In .NET 4+, one uses System.Threading.Tasks, so as to enable all task combining operators (WhenAll etc), and to hook up into C# 5.0 async feature.
For another example, in JavaScript, there's nothing to cover this in the standard library, but jQuery has popularized the "deferred promise" interface that is now separately specified. So if I were to write async readLine in JS, I would name it readLineAsync, and implement then method on the returned value.
What, if any, is the established pattern in Python land? Looking through the standard library, I see several modules offering asynchronous APIs, but no consistent pattern between them, and nothing like a standardized protocol for "tasks" or "promises". Perhaps there is some pattern that can be derived from popular third-party libraries?
I've also seen the (oft-mentioned in this context) Deferred class in Twisted, but it seems to be overengineered for a general-purpose promise API, and rather adapted to the specific needs of this library. It doesn't look like something that I could easily clone an interface for (without taking a dependency on them) such that our promises would interoperate well if the client uses both libraries together in his application. Is there any other popular library or framework that has an explicitly designed API for this, that I could copy (and interoperate with) without taking a direct dependency?
Okay, so I have found PEP-3148, which does have a Future class. I cannot quite use it as is, so far as I can see, because proper instances are only created by Executor, and that is a class to convert existing synchronous APIs to asynchrony by e.g. moving the synchronous call to a background thread. However, I can replicate exactly the methods provided by Future objects - they match very closely what I would expect, i.e. the ability to (blocking) query for result, cancel, and add a callback.
Does this sound like a reasonable approach? Should it, perhaps, be accompanied by a proposal to add an abstract base class for the generic "future" concept to the Python standard library, just as Python collections have their ABCs?
Read the various "server" libraries for hints.
A good example is BaseHTTPServer
Specifically, the HTTPServer class definition shows how a "handler class" is provided.
Each request instantiates the handler class; that object then handles the request.
If you want to write "asynchronous I/O" with a "callback", you'd provide a ReadHandler class to your reader.
class AsyncReadHandler(object):
    def input(self, line, server):
        print(line)

read_line_async(AsyncReadHandler)
Something like that would follow some established design patterns.
Have you looked at decorators yet?
from threading import Thread

def addCallback(function):
    def wrapper(parameters, callback):
        # Run the function.
        result = function(parameters)
        # Run the callback asynchronously.
        Thread(target=callback).start()
        # Or run the callback synchronously:
        # callback()
        # Return the value of the function.
        return result
    return wrapper

@addCallback
def echo(value):
    print(value)

def callback():
    print('Callback')

echo('Hello World!', callback)