Twisted does support TCP keepalives, but I can't find a simple way to enable them on endpoints (client and server).
What is the best current practice for doing this?
I can't see a way to achieve this cleanly from endpoints via the API. However, take a look at the source of twisted.internet.endpoints._WrappingProtocol - you could possibly set your endpoint to use a _WrappingFactory*, which fires a deferred when the connection is made. At that point the transport is set on the protocol and you can call setTcpKeepAlive.
Given the underscore in the class name, I would say these are meant to be used internally and I wouldn't depend on their interface being consistent between releases. You should use them as a guide.
Alternatively, just call self.transport.setTcpKeepAlive in connectionMade of your Protocol and handle the case where this is not supported (i.e. where the protocol is used over another transport).
#!/usr/bin/python
# based on example at http://twistedmatrix.com/pipermail/twisted-python/2008-June/017836.html
from twisted.internet import protocol
from twisted.internet import reactor

class EchoProtocol(protocol.Protocol):
    def connectionMade(self):
        print "Client Connected Detected!"
        # enable keepalive if supported
        try:
            self.transport.setTcpKeepAlive(1)
        except AttributeError:
            pass

    def connectionLost(self, reason):
        print "Client Connection Lost!"

    def dataReceived(self, data):
        self.transport.write(data)

factory = protocol.Factory()
factory.protocol = EchoProtocol
reactor.listenTCP(8000, factory)
reactor.run()
For this simple example I feel that this gives a fairly clean solution; however, there are probably situations where the additional wrapper code is warranted.
* Note that _WrappingFactory subclasses ClientFactory and may not be suitable for servers.
Related
I have subclassed asyncio.Protocol to create a TCP client that connects to some server.
I would like to separate the lower-level interface from the application, and create a layered architecture, but I'm unsure how to proceed.
I followed the example of the TCP Echo Client present on the official documentation and the way I start the client is also very similar:
loop = asyncio.get_event_loop()
coro = loop.create_connection(partial(MyClient, loop),
'127.0.0.1', 8888)
loop.run_until_complete(coro)
loop.run_forever()
loop.close()
However, in my protocol I created two methods (service access points, technically) that provide services to the "N+1" layer:
def setDataReceivedCallback(self, fun):
    self.dataReceivedIndication = fun

def send(self, msg):
    self.transport.write(msg)
The send method would be used by the N+1 layer to send a message to the server (request), while the setDataReceivedCallback would be used to register a callback that is called when data_received is called (so that the protocol can issue an indication to the N+1 layer that some data has arrived).
However, I am not sure how I can get hold of those entry points.
To be more clear:
create_connection needs a callable that returns a Protocol instance, so I won't be able to get hold of the instance at that point
loop doesn't seem to expose any of the coroutines that it runs - furthermore, once I execute run_forever I lose the ability to get hold of the loop itself
What am I missing here?
Lo and behold, the answer lies in the documentation.
From the section on Creating Connections:
Note protocol_factory can be any kind of callable, not necessarily a
class. For example, if you want to use a pre-created protocol
instance, you can pass lambda: my_protocol.
This, translated into code, means the following:
loop = asyncio.get_event_loop()
ThisClient = MyClient(loop)
ThisClient.setDataReceivedCallback(whateverFunction)
# And you can also use and pass around ThisClient.send at this point
coro = loop.create_connection(lambda: ThisClient,
'127.0.0.1', 8888)
loop.run_until_complete(coro)
loop.run_forever()
loop.close()
I am completely new to Twisted, but it looks very promising for my project.
I would like to write a Python Twisted application which reads a record from a text file every x seconds and simultaneously listens on a TCP port (acting as a TCP server). If no clients are connected to the TCP server, the records are just discarded. If one or more clients are connected, the records are sent to all of them (all clients will receive the same line of the text file).
Can Twisted make this possible with a reasonable amount of LOCs?
Could anybody suggest an example to start with?
Thanks
C
Twisted's documentation includes information about how to run a TCP server. It also includes an information about how to perform work based on the passage of time. This should cover most of what you need to know.
Jean-Paul,
thanks for your answer.
Below is what I put together. The program sends strings with time stamps to one or more clients connected to the server. Reading synchronously from a file in this scenario is very simple, so I just use a fixed string with the time stamp.
My next step is to substitute the datetime.datetime.now() function call with a call to a web service. Basically, what I would like to create is a kind of proxy that is:
a client towards a web service, invoking it every x seconds to get the data
a TCP server towards a set of clients, streaming data continuously, or rather whenever a new data chunk is available (as the example below does)
The questions are:
Can you point me to an example of a similar system?
How can I combine the runEverySecond() method call with an asynchronous call to the web service using TCPClient capability of Twisted?
Thanks
C
from twisted.internet import protocol, reactor
from twisted.internet import task
import datetime

class Stream(protocol.Protocol):
    def __init__(self, f):
        self.factory = f

    def connectionMade(self):
        self.start = True

    def forward(self, data):
        if self.start:
            self.transport.write(data)

class StreamFactory(protocol.Factory):
    def __init__(self):
        self.connections = []

    def buildProtocol(self, addr):
        s = Stream(self)
        self.connections.append(s)
        return s

    def runEverySecond(self):
        for c in self.connections:
            c.forward(str(datetime.datetime.now()))

f = StreamFactory()
l = task.LoopingCall(f.runEverySecond)
l.start(1.0)  # call every second
reactor.listenTCP(8000, f)
reactor.run()
I've recently started using Python Twisted, and while it's very complex I'm really liking it! I've tried searching for the answer to this but I keep coming up dry, so I was hoping someone here is a Twisted guru:
I have a large/complex distributed system setup in a hierarchical format with masters, slaves, subslaves, etc..
At several points in my code, depending on the packet received, I need to send a packet of data to another node. The node the data needs to be sent to is not known before calling reactor.run(), so I feel like the answer might be different. I would like the connection to be TCP for reliability, but it only needs to send one packet. Sometimes I need an ACK back and sometimes I don't, but after that the connection can always die. The current way I've been handling this is by keeping a reference to the reactor in the class that needs to send the packet and calling:
tmpConn = MyClientFactory(dataToSend)
self.reactor.connectTCP(ADDR, PORT, tmpConn)
I feel that this might present a few issues however:
What happens with garbage collection if I don't keep a reference to tmpConn?
If I do keep a reference to it in my class, it ends up being garbage anyway because it only needed to send one packet.
As I said, there are many different Factories all doing things like this at the same time, so I wonder if this is the best way to handle this situation. Any pointers are greatly appreciated.
Here is a code snippet so the question is more clear.
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, Factory, ClientFactory

class OneShotProtocol(Protocol):
    def __init__(self, addr, data):
        self.myaddr = addr
        self.mydata = data

    def connectionMade(self):
        # We know we have a connection here so send the data
        self.transport.write(self.mydata)
        # Now we can kill the connection
        self.transport.loseConnection()

class OneShotFactory(ClientFactory):
    def __init__(self, data):
        self.mydata = data

    def buildProtocol(self, addr):
        return OneShotProtocol(addr, self.mydata)

class ListenProtocol(Protocol):
    def __init__(self, addr, factory):
        self.myaddr = addr
        # NOTE: I only save this because I've read multiple reactors are possible
        self.factory = factory

    def dataReceived(self, data):
        if data == 'stuff':
            # Alert the other node!
            tmpConn = OneShotFactory('The British are coming')
            self.factory.reactor.connectTCP(ADDR, PORT, tmpConn)
            # Moving on...

class ListenFactory(Factory):
    def __init__(self, reactor):
        self.reactor = reactor

    def buildProtocol(self, addr):
        return ListenProtocol(addr, self)

l = ListenFactory(reactor)
reactor.listenTCP(PORT, l)
reactor.run()
This sounds like a great way to implement the behavior you want.
You don't have to worry very much about garbage collection of your factory. The reactor will keep a reference to it (you passed it to connectTCP, after all) for as long as it needs to and then forget about it. If you also forget about it then Python's garbage collector will clean it up for you before too long.
The only adjustment you might want to make is to use the cool new "endpoint" APIs instead of using connectTCP directly. This doesn't change the basic idea of the solution; it just gives you a little more flexibility that you might someday benefit from.
I have a Twisted project which seeks to essentially rebroadcast collected data over TCP in JSON. I essentially have a USB library which I need to subscribe to and synchronously read in a while loop indefinitely like so:
while True:
    for line in usbDevice.streamData():
        data = MyBrandSpankingNewUSBDeviceData(line)
        # parse the data, convert to JSON
        output = convertDataToJSON(data)
        # broadcast the data
        ...
The problem, of course, is the .... Essentially, I need to start this process as soon as the server starts and end it when the server ends (Protocol.doStart and Protocol.doStop), and have it constantly running and broadcasting output to every connected transport.
How can I do this in Twisted? Obviously, I'd need to have the while loop run in its own thread, but how can I "subscribe" clients to listen to the output? It's also important that the USB data collection run only once, as having it running more than once could seriously mess things up.
In a nutshell, here's my architecture:
Server has a USB hub which is streaming data all the time. Server is constantly subscribed to this USB hub and is constantly reading data.
Clients will come and go, connecting and disconnecting at will.
We want to send data to all connected clients whenever it is available. How can I do this in Twisted?
One thing you probably want to do is try to extend the common protocol/transport independence. Even though you need a thread with a long-running loop, you can hide this from the protocol. The benefit is the same as usual: the protocol becomes easier to test, and if you ever manage to have a non-threaded implementation of reading the USB events, you can just change the transport without changing the protocol.
from threading import Thread

class USBThingy(Thread):
    def __init__(self, reactor, device, protocol):
        Thread.__init__(self)
        self._reactor = reactor
        self._device = device
        self._protocol = protocol

    def run(self):
        while True:
            for line in self._device.streamData():
                self._reactor.callFromThread(
                    self._protocol.usbStreamLineReceived, line)
The use of callFromThread is part of what makes this solution usable. It makes sure the usbStreamLineReceived method gets called in the reactor thread rather than in the thread that's reading from the USB device. So from the perspective of that protocol object, there's nothing special going on with respect to threading: it just has its method called once in a while when there's some data to process.
Your protocol then just needs to implement usbStreamLineReceived somehow, and implement your other application-specific logic, like keeping a list of observers:
class SomeUSBProtocol(object):
    def __init__(self):
        self.observers = []

    def usbStreamLineReceived(self, line):
        data = MyBrandSpankingNewUSBDeviceData(line)
        # broadcast the data to every registered observer
        for obs in self.observers[:]:
            obs(data)
And then observers can register themselves with an instance of this class and do whatever they want with the data:
class USBObserverThing(Protocol):
    def connectionMade(self):
        self.factory.usbProto.observers.append(self.emit)

    def connectionLost(self, reason):
        self.factory.usbProto.observers.remove(self.emit)

    def emit(self, data):
        # parse the data, convert to JSON
        output = convertDataToJSON(data)
        self.transport.write(output)
Hook it all together:
usbDevice = ...
usbProto = SomeUSBProtocol()
thingy = USBThingy(reactor, usbDevice, usbProto)
thingy.start()
factory = ServerFactory()
factory.protocol = USBObserverThing
factory.usbProto = usbProto
reactor.listenTCP(12345, factory)
reactor.run()
You can imagine a better observer register/unregister API (like one using actual methods instead of direct access to that list). You could also imagine giving the USBThingy a method for shutting down so SomeUSBProtocol could control when it stops running (so your process will actually be able to exit).
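Such a register/unregister wrapper might look like this sketch - a plain pub/sub helper, independent of Twisted itself, with the method names (register, unregister, broadcast) chosen here purely for illustration:

```python
class ObserverRegistry(object):
    """Small pub/sub helper: observers register callables, and the USB
    protocol broadcasts each line to whoever is currently subscribed."""

    def __init__(self):
        self._observers = []

    def register(self, callback):
        self._observers.append(callback)

    def unregister(self, callback):
        self._observers.remove(callback)

    def broadcast(self, data):
        # Iterate over a copy so observers may unregister mid-broadcast.
        for obs in list(self._observers):
            obs(data)

# Usage:
registry = ObserverRegistry()
seen = []
registry.register(seen.append)
registry.broadcast("line 1")
registry.unregister(seen.append)
registry.broadcast("line 2")
# seen is now ["line 1"] - the unregistered observer missed "line 2"
```

SomeUSBProtocol would hold one of these instead of a bare list, and USBObserverThing would call register/unregister in connectionMade/connectionLost.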
import socket

backlog = 1  # number of queued connections

sk_1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sk_2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

local = {"port": 1433}
internet = {"port": 9999}

sk_1.bind(('', internet["port"]))
sk_1.listen(backlog)

sk_2.bind(('', local["port"]))
sk_2.listen(backlog)
Basically, I have this code. I am trying to listen on two ports: 1433 and 9999. But this doesn't seem to work.
How can I listen on two ports within the same Python script?
The fancy-pants way to do this, if you want to use the Python standard library, would be to use SocketServer with the ThreadingMixIn - although the 'select' suggestion is probably the more efficient one.
Even though we only define one ThreadedTCPRequestHandler, you can easily repurpose it so that each listener has its own unique handler, and it should be fairly trivial to wrap the server/thread creation into a single method if that's the kind of thing you like.
#!/usr/bin/python
import threading
import time
import SocketServer

class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        self.data = self.request.recv(1024).strip()
        print "%s wrote: " % self.client_address[0]
        print self.data
        self.request.send(self.data.upper())

class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    pass

if __name__ == "__main__":
    HOST = ''
    PORT_A = 9999
    PORT_B = 9876

    server_A = ThreadedTCPServer((HOST, PORT_A), ThreadedTCPRequestHandler)
    server_B = ThreadedTCPServer((HOST, PORT_B), ThreadedTCPRequestHandler)

    server_A_thread = threading.Thread(target=server_A.serve_forever)
    server_B_thread = threading.Thread(target=server_B.serve_forever)
    server_A_thread.setDaemon(True)
    server_B_thread.setDaemon(True)
    server_A_thread.start()
    server_B_thread.start()

    while 1:
        time.sleep(1)
The code so far is fine, as far as it goes (except that a backlog of 1 seems unduly strict). The problem, of course, comes when you try to accept a connection on either listening socket, since accept is normally a blocking call (and "polling" by trying to accept with short timeouts on each socket alternately would burn machine cycles to no good purpose).
select to the rescue!-) select.select (or on the better OSs select.poll or even select.epoll or select.kqueue... but, good old select.select works everywhere!-) will let you know which socket is ready and when, so you can accept appropriately. Along these lines, asyncore and asynchat provide a bit more organization (and third-party framework twisted, of course, adds a lot of such "asynchronous" functionality).
Alternatively, you can devote separate threads to servicing the two listening sockets, but in this case, if the different sockets' functionality needs to affect the same shared data structures, coordination (locking &c) may become ticklish. I would certainly recommend trying the async approach first -- it's actually simpler, as well as offering potential for substantially better performance!-)
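For completeness, a select.select version of the two-listener setup might look like the sketch below; the ephemeral ports and the in-process demo clients are only there so the example is self-contained and exits (in real code you would bind 1433 and 9999 and loop forever):

```python
import select
import socket

def make_listener():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", 0))  # ephemeral port for the demo
    s.listen(5)
    return s

sk_1 = make_listener()  # stands in for the "internet" port (9999)
sk_2 = make_listener()  # stands in for the "local" port (1433)

# Demo clients so select has something to report; normally these
# would be remote peers connecting in.
clients = [socket.create_connection(s.getsockname()) for s in (sk_1, sk_2)]

accepted = []
while len(accepted) < 2:
    # Block until at least one listener has a pending connection.
    readable, _, _ = select.select([sk_1, sk_2], [], [], 5)
    for s in readable:
        conn, addr = s.accept()
        accepted.append(s)  # dispatch per-listener handling here
        conn.close()

for sock in clients + [sk_1, sk_2]:
    sock.close()
```

The key point is the single select call multiplexing both listening sockets, so one thread services both ports without busy-waiting.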