I'm attempting to create a database-driven DNS server (specifically to handle MX records only, and pass everything else upstream) in Twisted using Python 2.7. The code below works (in terms of getting a result), but is not operating asynchronously. Instead, any DNS requests coming in block the entire program from taking any other requests until the first one has been replied to. We need this to scale, and at the moment we can't figure out where we've gone wrong. If anyone has a working example to share, or sees the issue, we'd be eternally grateful.
import settings
import db
from twisted.names import dns, server, client, cache
from twisted.application import service, internet
from twisted.internet import defer
class DNSResolver(client.Resolver):
def __init__(self, servers):
client.Resolver.__init__(self, servers=servers)
    @defer.inlineCallbacks
def _lookup_mx_records(self, hostname, timeout):
# Check the DB to see if we handle this domain.
mx_results = yield db.get_domain_mx_record_list(hostname)
if mx_results:
defer.returnValue(
[([dns.RRHeader(hostname, dns.MX, dns.IN, settings.DNS_TTL,
dns.Record_MX(priority, forward, settings.DNS_TTL))
for forward, priority in mx_results]),
(), ()])
# If the hostname isn't in the DB, we forward
# to our upstream DNS provider (8.8.8.8).
else:
i = yield self._lookup(hostname, dns.IN, dns.MX, timeout)
defer.returnValue(i)
def lookupMailExchange(self, name, timeout=None):
"""
The twisted function which is called when an MX record lookup is requested.
:param name: The domain name being queried for (e.g. example.org).
:param timeout: Time in seconds to wait for the query response. (optional, default: None)
:return: A DNS response for the record query.
"""
return self._lookup_mx_records(name, timeout)
# App name, UID, GID to run as. (root/root for port 53 bind)
application = service.Application('db_driven_dns', 1, 1)
# Set the secondary resolver
db_dns_resolver = DNSResolver(settings.DNS_NAMESERVERS)
# Create the protocol handlers
f = server.DNSServerFactory(caches=[cache.CacheResolver()], clients=[db_dns_resolver])
p = dns.DNSDatagramProtocol(f)
f.noisy = p.noisy = False
# Register as a tcp and udp service
ret = service.MultiService()
PORT=53
for (klass, arg) in [(internet.TCPServer, f), (internet.UDPServer, p)]:
s = klass(PORT, arg)
s.setServiceParent(ret)
# Run all of the above as a twistd application
ret.setServiceParent(service.IServiceCollection(application))
EDIT #1
blakev suggested that I might not be using the generator correctly (which is certainly possible). But if I simplify this down a little bit to not even use the DB, I still cannot process more than one DNS request at a time. To test this, I have stripped the class down. What follows is my entire, runnable, test file. Even in this highly stripped-down version of my server, Twisted does not accept any more requests until the first one has been answered.
import sys
import logging
from twisted.names import dns, server, client, cache
from twisted.application import service, internet
from twisted.internet import defer
class DNSResolver(client.Resolver):
def __init__(self, servers):
client.Resolver.__init__(self, servers=servers)
def lookupMailExchange(self, name, timeout=None):
"""
The twisted function which is called when an MX record lookup is requested.
:param name: The domain name being queried for (e.g. example.org).
:param timeout: Time in seconds to wait for the query response. (optional, default: None)
:return: A DNS response for the record query.
"""
logging.critical("Query for " + name)
return defer.succeed([
(dns.RRHeader(name, dns.MX, dns.IN, 600,
dns.Record_MX(1, "10.0.0.9", 600)),), (), ()
])
# App name, UID, GID to run as. (root/root for port 53 bind)
application = service.Application('db_driven_dns', 1, 1)
# Set the secondary resolver
db_dns_resolver = DNSResolver( [("8.8.8.8", 53), ("8.8.4.4", 53)] )
# Create the protocol handlers
f = server.DNSServerFactory(caches=[cache.CacheResolver()], clients=[db_dns_resolver])
p = dns.DNSDatagramProtocol(f)
f.noisy = p.noisy = False
# Register as a tcp and udp service
ret = service.MultiService()
PORT=53
for (klass, arg) in [(internet.TCPServer, f), (internet.UDPServer, p)]:
s = klass(PORT, arg)
s.setServiceParent(ret)
# Run all of the above as a twistd application
ret.setServiceParent(service.IServiceCollection(application))
# If called directly, instruct the user to run it through twistd
if __name__ == '__main__':
print "Usage: sudo twistd -y %s (background) OR sudo twistd -noy %s (foreground)" % (sys.argv[0], sys.argv[0])
Matt,
I tried your latest example and it works fine. I think you may be testing it wrong.
In your later comments you talk about using time.sleep(5) in the lookup method to simulate a slow response.
You can't do that. It will block the reactor. If you want to simulate a delay, use reactor.callLater to fire the deferred
https://twistedmatrix.com/documents/current/api/twisted.internet.interfaces.IReactorTime.html#callLater
eg
def lookupMailExchange(self, name, timeout=None):
d = defer.Deferred()
self._reactor.callLater(
5, d.callback,
[(dns.RRHeader(name, dns.MX, dns.IN, 600,
dns.Record_MX(1, "mail.example.com", 600)),), (), ()]
)
return d
Here's how I tested:
time bash -c 'for n in "google.com" "yahoo.com"; do dig -p 10053 @127.0.0.1 "$n" MX +short +tries=1 +notcp +time=10 & done; wait'
And the output shows that both responses came back after 5 seconds
1 10.0.0.9.
1 10.0.0.9.
real 0m5.019s
user 0m0.015s
sys 0m0.013s
Similarly, you need to make sure that calls to your database don't block:
https://twistedmatrix.com/documents/current/core/howto/rdbms.html
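For example, here is a minimal non-blocking sketch of the db.get_domain_mx_record_list call from the question (the database file, table name and schema are assumptions for illustration):

from twisted.enterprise import adbapi

# adbapi runs each query on a thread pool and fires a Deferred with the
# resulting rows, so the reactor is never blocked waiting on the database.
dbpool = adbapi.ConnectionPool("sqlite3", "dns.db", check_same_thread=False)

def get_domain_mx_record_list(hostname):
    # Fires with a list of (forward, priority) rows, or [] for unknown domains.
    return dbpool.runQuery(
        "SELECT forward, priority FROM domain_mx WHERE domain = ?",
        (hostname,))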
Some other points:
MX records should contain hostnames, not IP addresses - https://www.rfc-editor.org/rfc/rfc1035#section-3.3.9
Don't use nslookup -
https://jdebp.eu./FGA/nslookup-flaws.html and http://cr.yp.to/djbdns/nslookup.html
Be careful that stdlib logging doesn't block (a minimal hook-up sketch follows this list) -
https://twistedmatrix.com/documents/current/api/twisted.python.log.PythonLoggingObserver.html
Configure your test client to only issue one UDP query, otherwise retries and followup TCP queries may confuse your tests.
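On the logging point, here is a minimal hook-up sketch (the logger name is an arbitrary choice). The observer itself is cheap, but any stdlib handlers attached to that logger run in the reactor thread, so they must not block:

import logging
from twisted.python import log

# Forward Twisted log events to the stdlib logging module.
logging.basicConfig(level=logging.INFO)
observer = log.PythonLoggingObserver(loggerName='dns_server')
observer.start()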
I want to make a little update script for a software that runs on a Raspberry Pi and works like a local server. That should connect to a master server in the web to get software updates and also to verify the license of the software.
For that I set up two python scripts. I want these to connect via a TLS socket. Then the client checks the server certificate and the server checks if it's one of the authorized clients. I found a solution for this using twisted on this page.
Now there is a problem left. I want to know which client (depending on the certificate) is establishing the connection. Is there a way to do this in Python 3 with twisted?
I'm happy with any answer.
In a word: yes, this is quite possible, and all the necessary stuff is
ported to python 3 - I tested all the following under Python 3.4 on my Mac and it seems to
work fine.
The short answer is
"use twisted.internet.ssl.Certificate.peerFromTransport"
but given that a lot of set-up is required to get to the point where that is
possible, I've constructed a fully working example that you should be able to
try out and build upon.
For posterity, you'll first need to generate a few client certificates all
signed by the same CA. You've probably already done this, but so others can
understand the answer and try it out on their own (and so I could test my
answer myself ;-)), they'll need some code like this:
# newcert.py
from twisted.python.filepath import FilePath
from twisted.internet.ssl import PrivateCertificate, KeyPair, DN
def getCAPrivateCert():
privatePath = FilePath(b"ca-private-cert.pem")
if privatePath.exists():
return PrivateCertificate.loadPEM(privatePath.getContent())
else:
caKey = KeyPair.generate(size=4096)
caCert = caKey.selfSignedCert(1, CN="the-authority")
privatePath.setContent(caCert.dumpPEM())
return caCert
def clientCertFor(name):
signingCert = getCAPrivateCert()
clientKey = KeyPair.generate(size=4096)
csr = clientKey.requestObject(DN(CN=name), "sha1")
clientCert = signingCert.signRequestObject(
csr, serialNumber=1, digestAlgorithm="sha1")
return PrivateCertificate.fromCertificateAndKeyPair(clientCert, clientKey)
if __name__ == '__main__':
import sys
name = sys.argv[1]
pem = clientCertFor(name.encode("utf-8")).dumpPEM()
FilePath(name.encode("utf-8") + b".client.private.pem").setContent(pem)
With this program, you can create a few certificates like so:
$ python newcert.py a
$ python newcert.py b
Now you should have a few files you can use:
$ ls -1 *.pem
a.client.private.pem
b.client.private.pem
ca-private-cert.pem
Then you'll want a client which uses one of these certificates, and sends some
data:
# tlsclient.py
from twisted.python.filepath import FilePath
from twisted.internet.endpoints import SSL4ClientEndpoint
from twisted.internet.ssl import (
PrivateCertificate, Certificate, optionsForClientTLS)
from twisted.internet.defer import Deferred, inlineCallbacks
from twisted.internet.task import react
from twisted.internet.protocol import Protocol, Factory
class SendAnyData(Protocol):
def connectionMade(self):
self.deferred = Deferred()
self.transport.write(b"HELLO\r\n")
def connectionLost(self, reason):
self.deferred.callback(None)
@inlineCallbacks
def main(reactor, name):
pem = FilePath(name.encode("utf-8") + b".client.private.pem").getContent()
caPem = FilePath(b"ca-private-cert.pem").getContent()
clientEndpoint = SSL4ClientEndpoint(
reactor, u"localhost", 4321,
optionsForClientTLS(u"the-authority", Certificate.loadPEM(caPem),
PrivateCertificate.loadPEM(pem)),
)
proto = yield clientEndpoint.connect(Factory.forProtocol(SendAnyData))
yield proto.deferred
import sys
react(main, sys.argv[1:])
And finally, a server which can distinguish between them:
# whichclient.py
from twisted.python.filepath import FilePath
from twisted.internet.endpoints import SSL4ServerEndpoint
from twisted.internet.ssl import PrivateCertificate, Certificate
from twisted.internet.defer import Deferred
from twisted.internet.task import react
from twisted.internet.protocol import Protocol, Factory
class ReportWhichClient(Protocol):
def dataReceived(self, data):
peerCertificate = Certificate.peerFromTransport(self.transport)
print(peerCertificate.getSubject().commonName.decode('utf-8'))
self.transport.loseConnection()
def main(reactor):
pemBytes = FilePath(b"ca-private-cert.pem").getContent()
certificateAuthority = Certificate.loadPEM(pemBytes)
myCertificate = PrivateCertificate.loadPEM(pemBytes)
serverEndpoint = SSL4ServerEndpoint(
reactor, 4321, myCertificate.options(certificateAuthority)
)
serverEndpoint.listen(Factory.forProtocol(ReportWhichClient))
return Deferred()
react(main, [])
For simplicity's sake we'll just re-use the CA's own certificate for the
server, but in a more realistic scenario you'd obviously want a more
appropriate certificate.
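If you want to try that, one option is to mint a dedicated server certificate with the same newcert.py helper shown earlier; keeping the name "the-authority" means the hostname that tlsclient.py passes to optionsForClientTLS still matches. This is only a sketch; note that newer service_identity releases no longer fall back to the certificate's commonName, so you may need a certificate with a DNS subjectAltName instead.

$ python newcert.py the-authority

# whichclient.py: load the CA and the server certificate separately
caBytes = FilePath(b"ca-private-cert.pem").getContent()
certificateAuthority = Certificate.loadPEM(caBytes)
myCertificate = PrivateCertificate.loadPEM(
    FilePath(b"the-authority.client.private.pem").getContent())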
You can now run whichclient.py in one window, then python tlsclient.py a;
python tlsclient.py b in another window, and see whichclient.py print out
a and then b respectively, identifying the clients by the commonName
field in their certificate's subject.
The one caveat here is that you might initially want to put that call to
Certificate.peerFromTransport into a connectionMade method; that won't
work.
Twisted does not presently have a callback for "TLS handshake complete";
hopefully it will eventually, but until it does, you have to wait until you've
received some authenticated data from the peer to be sure the handshake has
completed. For almost all applications, this is fine, since by the time you
have received instructions to do anything (download updates, in your case) the
peer must already have sent the certificate.
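If you need the peer's identity in more than one place, a small variation on ReportWhichClient is to cache it the first time authenticated data arrives (a sketch under the same assumptions as the example above):

class CachingClientReporter(Protocol):
    peerName = None

    def dataReceived(self, data):
        if self.peerName is None:
            # Safe here: receiving authenticated data implies the TLS
            # handshake has completed and the peer certificate is available.
            cert = Certificate.peerFromTransport(self.transport)
            self.peerName = cert.getSubject().commonName.decode('utf-8')
        print(self.peerName)
        self.transport.loseConnection()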
I have what I would think is a pretty common use case for Gevent. I need a UDP server that listens for requests, and based on the request submits a POST to an external web service. The external web service essentially only allows one request at a time.
I would like to have an asynchronous UDP server so that data can be immediately retrieved and stored so that I don't miss any requests (this part is easy with the DatagramServer gevent provides). Then I need some way to send requests to the external web service serially, but in such a way that it doesn't ruin the async of the UDP server.
I first tried monkey patching everything and what I ended up with was a quick solution, but one in which my requests to the external web service were not rate limited in any way and which resulted in errors.
It seems like what I need is a single non-blocking worker to send requests to the external web service in serial while the UDP server adds tasks to the queue from which the non-blocking worker is working.
What I need is information on running a gevent server with additional greenlets for other tasks (especially with a queue). I've been using the serve_forever function of the DatagramServer and think that I'll need to use the start method instead, but haven't found much information on how it would fit together.
Thanks,
EDIT
The answer worked very well. I've adapted the UDP server example code with the answer from @mguijarr to produce a working example for my use case:
from __future__ import print_function
from gevent.server import DatagramServer
import gevent.queue
import gevent.monkey
import urllib
gevent.monkey.patch_all()
n = 0
def process_request(q):
while True:
request = q.get()
print(request)
print(urllib.urlopen('https://test.com').read())
class EchoServer(DatagramServer):
__q = gevent.queue.Queue()
__request_processing_greenlet = gevent.spawn(process_request, __q)
def handle(self, data, address):
print('%s: got %r' % (address[0], data))
global n
n += 1
print(n)
self.__q.put(n)
self.socket.sendto('Received %s bytes' % len(data), address)
if __name__ == '__main__':
print('Receiving datagrams on :9000')
EchoServer(':9000').serve_forever()
Here is how I would do it:
Write a function taking a "queue" object as argument; this function will continuously process items from the queue. Each item is supposed to be a request for the web service.
This function could be a module-level function, not part of your DatagramServer instance:
def process_requests(q):
while True:
request = q.get()
# do your magic with 'request'
...
In your DatagramServer, make the function run within a greenlet (like a background task):
self.__q = gevent.queue.Queue()
self.__request_processing_greenlet = gevent.spawn(process_requests, self.__q)
When you receive the UDP request in your DatagramServer instance, push the request onto the queue:
self.__q.put(request)
This should do what you want. You still call 'serve_forever' on DatagramServer, no problem.
I am learning how to use Twisted AMP. I am developing a program that sends data from a client to a server and inserts the data in a SQLite3 DB. The server then sends back a result to the client which indicates success or error (try and except might not be the best way to do this but it is only a temporary solution while I work out the main problem). In order to do this I modified an example I found that originally did a sum and returned the result, so I realize that this might not be the most efficient way to do what I am trying to do. In particular I am trying to do some timings on multiple insertions (i.e. send the data to the server multiple times for multiple insertions) and I have included the code I have written. It works but clearly it is not a good way to send multiple data for insertion since I am performing multiple connections before running the reactor.
I have tried several ways to get around this including passing the ClientCreator to reactor.callWhenRunning() but you cannot do this with a deferred.
Any suggestions, advice or help with how to do this would be much appreciated. Here is the code.
Server:
from twisted.protocols import amp
from twisted.internet import reactor
from twisted.internet.protocol import Factory
import sqlite3, time
class Insert(amp.Command):
arguments = [('data', amp.Integer())]
response = [('insert_result', amp.Integer())]
class Protocol(amp.AMP):
def __init__(self):
self.conn = sqlite3.connect('biomed1.db')
self.c =self.conn.cursor()
self.res=None
    @Insert.responder
def dbInsert(self, data):
self.InsertDB(data) #call the DB inserter
result=self.res # send back the result of the insertion
return {'insert_result': result}
def InsertDB(self,data):
tm=time.time()
print "insert time:",tm
chx=data
PID=2
device_ID=5
try:
self.c.execute("INSERT INTO btdata4(co2_data, patient_Id, sensor_Id) VALUES ('%s','%s','%s')" % (chx, PID, device_ID))
except Exception, err:
print err
self.res=0
else:
self.res=1
self.conn.commit()
pf = Factory()
pf.protocol = Protocol
reactor.listenTCP(1234, pf)
reactor.run()
Client:
from twisted.internet import reactor
from twisted.internet.protocol import ClientCreator
from twisted.protocols import amp
import time
class Insert(amp.Command):
arguments = [('data', amp.Integer())]
response = [('insert_result', amp.Integer())]
def connected(protocol):
return protocol.callRemote(Insert, data=5555).addCallback(gotResult)
def gotResult(result):
print 'insert_result:', result['insert_result']
tm=time.time()
print "stop", tm
def error(reason):
print "error", reason
tm=time.time()
print "start",tm
for i in range (10): #send data over ten times
ClientCreator(reactor, amp.AMP).connectTCP(
'127.0.0.1', 1234).addCallback(connected).addErrback(error)
reactor.run()
End of Code.
Thank you.
A few things which will improve your server code.
First and foremost: the use of direct database access functions is discouraged in Twisted, as they normally block. Twisted has a nice abstraction for database access which provides a Twisted approach to DB connections - twisted.adbapi
Now on to reuse of the DB connection: if you want to reuse certain assets (like a database connection) across a number of Protocol instances, you should initialize them in the constructor of the Factory or, if you don't fancy initiating such things at launch time, create a resource access method which initializes the resource upon the first call, assigns it to an instance variable, and returns it on subsequent calls.
When the Factory creates a specific Protocol instance, it adds a reference to itself inside the protocol; see line 97 of twisted.internet.protocol.
Then within your Protocol instance, you can access shared database connection instance like:
self.factory.whatever_name_for_db_connection.doSomething()
Reworked server code (I don't have Python, Twisted or even a decent IDE available, so this is pretty much untested; some errors are to be expected):
from twisted.protocols import amp
from twisted.internet import reactor
from twisted.internet.protocol import Factory
import time
from twisted.enterprise import adbapi
class AMPDBAccessProtocolFactory(Factory):
def getDBConnection(self):
if 'dbConnection' in dir(self):
return self.dbConnection
else:
self.dbConnection = SQLLiteTestConnection(self.dbURL)
return self.dbConnection
class SQLLiteTestConnection(object):
"""
Provides abstraction for database access and some business functions.
"""
def __init__(self,dbURL):
self.dbPool = adbapi.ConnectionPool("sqlite3" , dbURL, check_same_thread=False)
def insertBTData4(self,data):
query = "INSERT INTO btdata4(co2_data, patient_Id, sensor_Id) VALUES (%s,%s,%s)"
tm=time.time()
print "insert time:",tm
chx=data
PID=2
device_ID=5
dF = self.dbPool.runQuery(query,(chx, PID, device_ID))
dF.addCallback(self.onQuerySuccess,insert_data=data)
return dF
    def onQuerySuccess(self, result, insert_data):
"""
Here you can inspect query results or add any other valuable information to be parsed at client.
For the test sake we will just return True to a customer if query was a success.
original data available at kw argument insert_data
"""
return True
class Insert(amp.Command):
arguments = [('data', amp.Integer())]
response = [('insert_result', amp.Integer())]
class MyAMPProtocol(amp.AMP):
    @Insert.responder
    def dbInsert(self, data):
        db = self.factory.getDBConnection()
        dF = db.insertBTData4(data)
        # AMP responders must resolve to a dict matching the response
        # definition, so map the query outcome onto 'insert_result'.
        dF.addCallback(lambda ok: {'insert_result': 1 if ok else 0})
        dF.addErrback(self.onInsertError, data)
        return dF
    def onInsertError(self, error, data):
        """
        Here you could do some additional error checking or inspect the data
        which was handed in for the insert. Returning the Failure propagates
        it, so the client gets notified of the error.
        """
        return error
if __name__=='__main__':
pf = AMPDBAccessProtocolFactory()
pf.protocol = MyAMPProtocol
pf.dbURL='biomed1.db'
reactor.listenTCP(1234, pf)
reactor.run()
Now on to the client. If AMP follows the overall RPC logic (can't test it currently) it should be able to reuse the same connection across a number of calls. So I have created a ServerProxy class which will hold that reusable protocol instance and provide an abstraction for calls:
from twisted.internet import reactor
from twisted.internet.protocol import ClientCreator
from twisted.protocols import amp
import time
class Insert(amp.Command):
arguments = [('data', amp.Integer())]
response = [('insert_result', amp.Integer())]
class ServerProxy(object):
def connected(self,protocol):
self.serverProxy = protocol # assign protocol as instance variable
reactor.callLater(5,self.startMultipleInsert) #after five seconds start multiple insert procedure
def remote_insert(self,data):
        # AMP arguments are passed as keywords matching the command schema
        return self.serverProxy.callRemote(Insert, data=data)
def startMultipleInsert(self):
for i in range (10): #send data over ten times
dF = self.remote_insert(i)
dF.addCallback(self.gotInsertResult)
dF.addErrback(error)
def gotInsertResult(self,result):
print 'insert_result:', str(result)
tm=time.time()
print "stop", tm
def error(reason):
print "error", reason
def main():
tm=time.time()
print "start",tm
serverProxy = ServerProxy()
ClientCreator(reactor, amp.AMP).connectTCP('127.0.0.1', 1234).addCallback(serverProxy.connected).addErrback(error)
reactor.run()
if __name__=='__main__':
main()
So I've looked around at a few things involving writing an HTTP proxy using Python and the Twisted framework.
Essentially, like some other questions, I'd like to be able to modify the data that will be sent back to the browser. That is, the browser requests a resource and the proxy fetches it. Before the resource is returned to the browser, I'd like to be able to modify ANY content (HTTP headers AND body).
This ( Need help writing a twisted proxy ) was what I initially found. I tried it out, but it didn't work for me. I also found this ( Python Twisted proxy - how to intercept packets ), which I thought would work; however, I can only see the HTTP requests from the browser.
I am looking for any advice. Some thoughts I have are to use the ProxyClient and ProxyRequest classes and override the functions, but I read that the Proxy class itself is a combination of both.
For those who may ask to see some code, it should be noted that I have worked with only the above two examples. Any help is great.
Thanks.
To create a ProxyFactory that can modify server response headers and content, you could override the ProxyClient.handle*() methods:
from twisted.python import log
from twisted.web import http, proxy
class ProxyClient(proxy.ProxyClient):
"""Mangle returned header, content here.
Use `self.father` methods to modify request directly.
"""
def handleHeader(self, key, value):
# change response header here
log.msg("Header: %s: %s" % (key, value))
proxy.ProxyClient.handleHeader(self, key, value)
def handleResponsePart(self, buffer):
# change response part here
log.msg("Content: %s" % (buffer[:50],))
# make all content upper case
proxy.ProxyClient.handleResponsePart(self, buffer.upper())
class ProxyClientFactory(proxy.ProxyClientFactory):
protocol = ProxyClient
class ProxyRequest(proxy.ProxyRequest):
protocols = dict(http=ProxyClientFactory)
class Proxy(proxy.Proxy):
requestFactory = ProxyRequest
class ProxyFactory(http.HTTPFactory):
protocol = Proxy
I've got this solution by looking at the source of twisted.web.proxy. I don't know how idiomatic it is.
To run it as a script or via twistd, add at the end:
portstr = "tcp:8080:interface=localhost" # serve on localhost:8080
if __name__ == '__main__': # $ python proxy_modify_request.py
import sys
from twisted.internet import endpoints, reactor
def shutdown(reason, reactor, stopping=[]):
"""Stop the reactor."""
if stopping: return
stopping.append(True)
if reason:
log.msg(reason.value)
reactor.callWhenRunning(reactor.stop)
log.startLogging(sys.stdout)
endpoint = endpoints.serverFromString(reactor, portstr)
d = endpoint.listen(ProxyFactory())
d.addErrback(shutdown, reactor)
reactor.run()
else: # $ twistd -ny proxy_modify_request.py
from twisted.application import service, strports
application = service.Application("proxy_modify_request")
strports.service(portstr, ProxyFactory()).setServiceParent(application)
Usage
$ twistd -ny proxy_modify_request.py
In another terminal:
$ curl -x localhost:8080 http://example.com
For two-way proxy using twisted see the article:
http://sujitpal.blogspot.com/2010/03/http-debug-proxy-with-twisted.html
Hello, I am working on developing an RPC server based on Twisted to serve several microcontrollers which make RPC calls to a Twisted JSON-RPC server. But the application also requires that the server send information to each micro at any time, so the question is: what is a good practice to prevent the response from a remote JSON-RPC call made by a micro from being confused with a JSON-RPC request the server makes on behalf of a user?
The consequence I am having now is that the micros receive bad information, because they don't know whether the netstring/JSON string coming from the socket is the response to a previous request or a new request from the server.
Here is my code:
from twisted.internet import reactor
from txjsonrpc.netstring import jsonrpc
import weakref
creds = {'user1':'pass1','user2':'pass2','user3':'pass3'}
class arduinoRPC(jsonrpc.JSONRPC):
def connectionMade(self):
pass
def jsonrpc_identify(self,username,password,mac):
""" Each client must be authenticated just after to be connected calling this rpc """
if creds.has_key(username):
if creds[username] == password:
authenticated = True
else:
authenticated = False
else:
authenticated = False
if authenticated:
self.factory.clients.append(self)
self.factory.references[mac] = weakref.ref(self)
return {'results':'Authenticated as %s'%username,'error':None}
else:
self.transport.loseConnection()
def jsonrpc_sync_acq(self,data,f):
"""Save into django table data acquired from sensors and send ack to gateway"""
if not (self in self.factory.clients):
self.transport.loseConnection()
print f
return {'results':'synced %s records'%len(data),'error':'null'}
def connectionLost(self, reason):
""" mac address is searched and all reference to self.factory.clientes are erased """
for mac in self.factory.references.keys():
if self.factory.references[mac]() == self:
print 'Connection closed - Mac address: %s'%mac
del self.factory.references[mac]
self.factory.clients.remove(self)
class rpcfactory(jsonrpc.RPCFactory):
protocol = arduinoRPC
def __init__(self, maxLength=1024):
self.maxLength = maxLength
self.subHandlers = {}
self.clients = []
self.references = {}
""" Asynchronous remote calling to micros, simulating random calling from server """
import threading,time,random,netstring,json
class asyncGatewayCalls(threading.Thread):
def __init__(self,rpcfactory):
threading.Thread.__init__(self)
self.rpcfactory = rpcfactory
"""identifiers of each micro/client connected"""
self.remoteMacList = ['12:23:23:23:23:23:23','167:67:67:67:67:67:67','90:90:90:90:90:90:90']
def run(self):
while True:
time.sleep(10)
while True:
""" call to any of three potential micros connected """
mac = self.remoteMacList[random.randrange(0,len(self.remoteMacList))]
if self.rpcfactory.references.has_key(mac):
print 'Calling %s'%mac
proto = self.rpcfactory.references[mac]()
""" requesting echo from selected micro"""
dataToSend = netstring.encode(json.dumps({'method':'echo_from_micro','params':['plop']}))
proto.transport.write(dataToSend)
break
factory = rpcfactory(arduinoRPC)
"""start thread caller"""
r=asyncGatewayCalls(factory)
r.start()
reactor.listenTCP(7080, factory)
print "Micros remote RPC server started"
reactor.run()
You need to add enough information to each message so that the recipient can determine how to interpret it. Your requirements sound very similar to those of AMP, so you could either use AMP instead or use the same structure as AMP to identify your messages. Specifically:
In requests, put a particular key - for example, AMP uses "_ask" to identify requests. It also gives these a unique value, which further identifies that request for the lifetime of the connection.
In responses, put a different key - for example, AMP uses "_answer" for this. The value matches up with the value from the "_ask" key in the request the response is for.
Using an approach like this, you just have to look to see whether there is an "_ask" key or an "_answer" key to determine if you've received a new request or a response to a previous request.
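Here is a minimal sketch of that scheme applied to the netstring/JSON messages in the question; the key names mirror AMP's, but the helpers and the counter are illustrative assumptions, not txjsonrpc API:

import json
import itertools

_next_id = itertools.count(1)

def encode_request(method, params):
    # Server-initiated request: tag it so the micro knows it must answer.
    return json.dumps({'method': method, 'params': params,
                       '_ask': next(_next_id)})

def encode_response(request_msg, result):
    # Response: echo the request's id back under the other key.
    return json.dumps({'result': result, '_answer': request_msg['_ask']})

def dispatch(raw):
    msg = json.loads(raw)
    if '_ask' in msg:
        return 'request', msg    # handle it, then reply with '_answer' set
    elif '_answer' in msg:
        return 'response', msg   # match it to the pending '_ask' id
    raise ValueError('unrecognized message: %r' % (msg,))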
On a separate topic, your asyncGatewayCalls class shouldn't be thread-based. There's no apparent reason for it to use threads, and by doing so it is also misusing Twisted APIs in a way which will lead to undefined behavior. Most Twisted APIs can only be used in the thread in which you called reactor.run. The only exception is reactor.callFromThread, which you can use to send a message to the reactor thread from any other thread. asyncGatewayCalls tries to write to a transport, though, which will lead to buffer corruption or arbitrary delays in the data being sent, or perhaps worse things. Instead, you can write asyncGatewayCalls like this:
from twisted.internet import reactor
from twisted.internet.task import LoopingCall
class asyncGatewayCalls(object):
def __init__(self, rpcfactory):
self.rpcfactory = rpcfactory
self.remoteMacList = [...]
    def run(self):
self._call = LoopingCall(self._pokeMicro)
return self._call.start(10)
def _pokeMicro(self):
while True:
mac = self.remoteMacList[...]
if mac in self.rpcfactory.references:
proto = ...
dataToSend = ...
proto.transport.write(dataToSend)
break
factory = ...
r = asyncGatewayCalls(factory)
r.run()
reactor.listenTCP(7080, factory)
reactor.run()
This gives you a single-threaded solution which should have the same behavior as you intended for the original asyncGatewayCalls class. Instead of sleeping in a loop in a thread in order to schedule the calls, though, it uses the reactor's scheduling APIs (via the higher-level LoopingCall class, which schedules things to be called repeatedly) to make sure _pokeMicro gets called every ten seconds.
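For completeness: if some part of your system genuinely must run in a separate thread, the one safe bridge mentioned above is reactor.callFromThread. A sketch:

from twisted.internet import reactor

# Never call proto.transport.write(...) directly from a foreign thread;
# hand the call to the reactor thread instead.
reactor.callFromThread(proto.transport.write, dataToSend)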