How do I convert an asynchronous design pattern to a synchronous one? - Python

I have written a script that gets the account value from my stock trading account; it looks like this (working):
from ib.opt import ibConnection, message

def account_summary_handler(msg):
    print(msg.tag, msg.value, msg.currency)
    connection.cancelAccountSummary(1)
    connection.disconnect()

connection = ibConnection(port=7497, clientId=100)
connection.register(account_summary_handler, 'AccountSummary')
connection.connect()
connection.reqAccountSummary(1, 'All', 'NetLiquidation')
It's a realtime API, so once the connection is open, updates stream in and account_summary_handler gets called on every update.
I want to get the account balance from another, synchronous script with a call like:
myaccount.account_summary()
I've written something that looks like this (untested):
class IBAccount(object):
    def __init__(self):
        self.clientId = 100
        self.port = 7497

    def connect(self):
        self.connection = ibConnection(port=self.port, clientId=self.clientId)

    def account_summary(self):
        self.connection.register(self.account_summary_handler, 'AccountSummary')
        self.connection.connect()
        self.connection.reqAccountSummary(1, 'All', 'NetLiquidation')

    def account_summary_handler(self, msg):
        self.connection.cancelAccountSummary(1)
        self.connection.disconnect()
        return msg.value
I believe I need account_summary() to block, and return the actual account value, rather than have it come back through a different function.
My question is:
How do I get account_summary() to return the account value?
In addition, if I'm using an inappropriate design pattern for this code and shouldn't use a class, please advise.

The solution was to make it stateful:
from ib.opt import ibConnection, message

class IBstate(object):
    def __init__(self):
        self.clientId = 100
        self.port = 7496
        self.netLiquidation = None
        self.connection = ibConnection(port=self.port, clientId=self.clientId)
        self.connection.connect()
        self.connection.register(self.account_summary_handler, 'AccountSummary')
        self.connection.reqAccountSummary(1, 'All', 'NetLiquidation')

    def account_summary_handler(self, msg):
        self.netLiquidation = msg
This runs in the background and updates the variables on the object as they change on the remote end.
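If you do want a single call that blocks until the value arrives, a common pattern is to wait on a threading.Event that the handler sets. The sketch below assumes (as with IbPy) that ib.opt delivers messages on a background reader thread, so the caller can safely block; the class and method names are just illustrative.

import threading
from ib.opt import ibConnection

class IBAccount(object):
    def __init__(self, port=7497, client_id=100):
        self.connection = ibConnection(port=port, clientId=client_id)
        self.connection.register(self._handler, 'AccountSummary')
        self._received = threading.Event()
        self._value = None

    def account_summary(self, timeout=10):
        self._received.clear()
        self.connection.connect()
        self.connection.reqAccountSummary(1, 'All', 'NetLiquidation')
        # Block until the handler fires (or give up after `timeout` seconds).
        if not self._received.wait(timeout):
            raise RuntimeError('no AccountSummary received within %s seconds' % timeout)
        self.connection.cancelAccountSummary(1)
        self.connection.disconnect()
        return self._value

    def _handler(self, msg):
        # Called on the API's reader thread; stash the value and wake the caller.
        self._value = msg.value
        self._received.set()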

Related

How to mock a pika connection for a different module?

I have a class that imports the following module:
import pika
import pickle
from apscheduler.schedulers.background import BackgroundScheduler
import time
import logging

class RabbitMQ():
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
        self.channel = self.connection.channel()
        self.sched = BackgroundScheduler()
        self.sched.add_job(self.keep_connection_alive, id='clean_old_data', trigger='cron', hour='*', minute='*', second='*/50')
        self.sched.start()

    def publish_message(self, message, path="path"):
        message["path"] = path
        logging.info(message)
        message = pickle.dumps(message)
        self.channel.basic_publish(exchange="", routing_key="server", body=message)

    def keep_connection_alive(self):
        self.connection.process_data_events()

rabbitMQ = RabbitMQ()

def publish_message(message, path="path"):
    rabbitMQ.publish_message(message, path=path)
My class.py:
import RabbitMQ as rq

class MyClass():
    ...
When writing unit tests for MyClass I can't mock the connection for this part of the code; it keeps throwing exceptions and won't work at all:
pika.exceptions.ConnectionClosed: Connection to 127.0.0.1:5672 failed: [Errno 111] Connection refused
I tried a couple of approaches to mock this connection, but none of them seem to work. What can I do to support this sort of test? Mock the entire RabbitMQ module? Or maybe mock only the connection?
As the commenter above mentions, the issue is the module-level creation of your RabbitMQ instance.
My knee-jerk reaction is to say "just get rid of that, and your module-level publish_message". If you can do that, go for that solution. You have a publish_message on your RabbitMQ class that accepts the same args; any caller would then be expected to create an instance of your RabbitMQ class.
If you don't want to or can't do that for whatever reason, you should move that object instantiation into your module-level publish_message, like this:
def publish_message(message, path="path"):
    rabbitMQ = RabbitMQ()
    rabbitMQ.publish_message(message, path=path)
This will create a new connection every time you call it though. Maybe that's ok...but maybe it's not. So to avoid creating duplicate connections, you'd want to introduce something like a singleton pattern:
class RabbitMQ():
    __instance = None
    ...

    @classmethod
    def get_instance(cls):
        if cls.__instance is None:
            cls.__instance = RabbitMQ()
        return cls.__instance

def publish_message(message, path="path"):
    RabbitMQ.get_instance().publish_message(message, path=path)
Ideally, though, you'd want to avoid the singleton pattern entirely. Whatever code calls this should store a single instance of your RabbitMQ object and call publish_message on it directly.
So the TLDR/ideal solution IMO: Just get rid of those last 3 lines. The caller should create a RabbitMQ object.
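For illustration, a minimal sketch of that shape, assuming MyClass takes the RabbitMQ instance through its constructor (MyClass's real methods aren't shown in the question, so do_work here is made up):

import RabbitMQ as rq

class MyClass(object):
    def __init__(self, rabbit):
        self.rabbit = rabbit  # a rq.RabbitMQ in production, a mock/fake in tests

    def do_work(self, payload):
        self.rabbit.publish_message(payload, path="work")

if __name__ == '__main__':
    MyClass(rq.RabbitMQ()).do_work({'hello': 'world'})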
EDIT: Oh, and why it's happening: when you import that module, rabbitMQ = RabbitMQ() is evaluated. Your attempt to mock it happens after that evaluation, so the real connection attempt has already failed.
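And one way the mocking itself might then look, as a sketch using unittest.mock: it assumes the module above is importable as RabbitMQ and that the import is deferred until the patch is active (or that the module-level instance has been removed as suggested).

import unittest
from unittest import mock

class PublishTests(unittest.TestCase):
    @mock.patch('pika.BlockingConnection')            # replace the real connection class
    def test_publish(self, mock_connection):
        import RabbitMQ as rq                         # import after patching so __init__ sees the mock
        mq = rq.RabbitMQ()
        mq.publish_message({'hello': 'world'})
        channel = mock_connection.return_value.channel.return_value
        channel.basic_publish.assert_called_once()    # the pickled body went to basic_publish

if __name__ == '__main__':
    unittest.main()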

Variables inside BaseHTTPRequestHandler (Python)

I am creating a chatbot using Python and the MS Bot Builder SDK for Python.
The bot is an HTTPServer using a handler. What I want are variables to help me keep track of the conversation, for example a message counter. But I can't get it to work: each time the bot receives a request (me sending something), it's as if another handler is created, because the message count is always 1. I'm not sure what is being called on each request.
Here is the (important) code:
class BotRequestHandler(BaseHTTPRequestHandler):
    count = 0

    @staticmethod
    def __create_reply_activity(request_activity, text):
        # not important

    def __handle_conversation_update_activity(self, activity):
        # not important

    def __handle_message_activity(self, activity):
        self.count += 1  ############## INCREMENTATION ##############
        self.send_response(200)
        self.end_headers()
        credentials = MicrosoftAppCredentials(APP_ID, APP_PASSWORD)
        connector = ConnectorClient(credentials, base_url=activity.service_url)
        reply = BotRequestHandler.__create_reply_activity(activity, '(%d) You said: %s' % (self.count, activity.text))
        connector.conversations.send_to_conversation(reply.conversation.id, reply)

    def __handle_authentication(self, activity):
        # not important

    def __unhandled_activity(self):
        # not important

    def do_POST(self):
        body = self.rfile.read(int(self.headers['Content-Length']))
        data = json.loads(str(body, 'utf-8'))
        activity = Activity.deserialize(data)
        if not self.__handle_authentication(activity):
            return
        if activity.type == ActivityTypes.conversation_update.value:
            self.__handle_conversation_update_activity(activity)
        elif activity.type == ActivityTypes.message.value:
            self.__handle_message_activity(activity)
        else:
            self.__unhandled_activity()
class BotServer(HTTPServer):
    def __init__(self):
        super().__init__(('localhost', 9000), BotRequestHandler)

    def _run(self):
        try:
            print('Started http server')
            self.serve_forever()
        except KeyboardInterrupt:
            print('^C received, shutting down server')
            self.socket.close()

server = BotServer()
server._run()
What I get if I enter the message 'a' 4 times is '(1) You said: a' 4 times.
I tried overriding the __init__ method of BaseHTTPRequestHandler but it didn't work.
For those who know: the thing is, with the Python SDK we don't have Waterfall dialogs like in Node.js (or I didn't find how they work; if someone knows, please tell me), and here I need to keep track of a lot of things from the user, so I need variables. And I really want to use Python because I need some ML and other modules in Python.
Thank you for your help.
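For what it's worth, here is a minimal standalone sketch of the usual workaround (assuming a single-process server, which HTTPServer is): BaseHTTPRequestHandler constructs a fresh handler instance for every request, so self.count += 1 always starts from the class default; storing the counter on the class itself (or on self.server) lets it survive across requests. The handler name and response body below are made up.

from http.server import BaseHTTPRequestHandler, HTTPServer

class CountingHandler(BaseHTTPRequestHandler):
    count = 0  # class-level: shared by every handler instance the server creates

    def do_POST(self):
        # A new CountingHandler is built for each request, so bump the class
        # attribute rather than creating a throwaway instance attribute.
        type(self).count += 1
        self.send_response(200)
        self.end_headers()
        self.wfile.write(('(%d) messages so far' % type(self).count).encode('utf-8'))

if __name__ == '__main__':
    HTTPServer(('localhost', 9000), CountingHandler).serve_forever()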

Correct use of coroutine in Tornado web server

I'm trying to convert a simple synchronous server to an asynchronous version. The server receives POST requests and retrieves the response from an external web service (Amazon SQS). Here's the synchronous code:
def post(self):
    zoom_level = self.get_argument('zoom_level')
    neLat = self.get_argument('neLat')
    neLon = self.get_argument('neLon')
    swLat = self.get_argument('swLat')
    swLon = self.get_argument('swLon')
    data = self._create_request_message(zoom_level, neLat, neLon, swLat, swLon)
    self._send_parking_spots_request(data)
    #....other stuff

def _send_parking_spots_request(self, data):
    msg = Message()
    msg.set_body(json.dumps(data))
    self._sqs_send_queue.write(msg)
Reading Tornado documentation and some threads here I ended with this code using coroutines:
def post(self):
    zoom_level = self.get_argument('zoom_level')
    neLat = self.get_argument('neLat')
    neLon = self.get_argument('neLon')
    swLat = self.get_argument('swLat')
    swLon = self.get_argument('swLon')
    data = self._create_request_message(zoom_level, neLat, neLon, swLat, swLon)
    self._send_parking_spots_request(data)
    self.finish()

@gen.coroutine
def _send_parking_spots_request(self, data):
    msg = Message()
    msg.set_body(json.dumps(data))
    yield gen.Task(write_msg, self._sqs_send_queue, msg)

def write_msg(queue, msg, callback=None):
    queue.write(msg)
Comparing the performance using siege, I find that the second version is even worse than the original one, so there's probably something about coroutines and Tornado asynchronous programming that I haven't understood at all.
Could you please help me with this?
Edit: self._sqs_send_queue is a queue object retrieved from the boto interface, and queue.write(msg) returns the message that has been written to the queue.
Tornado relies on you converting all your I/O to be non-blocking. Simply sticking the same code you were using before inside a gen.Task will not improve performance at all, because the I/O itself will still block the event loop. Additionally, you need to make your post method a coroutine and call _send_parking_spots_request using yield for the code to behave properly. So, a "correct" solution would look something like this:
@gen.coroutine
def post(self):
    ...
    yield self._send_parking_spots_request(data)  # wait (without blocking the event loop) until the method is done
    self.finish()

@gen.coroutine
def _send_parking_spots_request(self, data):
    msg = Message()
    msg.set_body(json.dumps(data))
    yield gen.Task(write_msg, self._sqs_send_queue, msg)

def write_msg(queue, msg, callback=None):
    yield queue.write(msg, callback=callback)  # This has to do non-blocking I/O.
In this example, queue.write would need to be some API that sends your request using non-blocking I/O, and executes callback when a response is received. Without knowing exactly what queue in your original example is, I can't specify exactly how that can be implemented in your case.
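For instance, if no truly non-blocking SQS client is available, one pragmatic fallback is to push the blocking boto call onto a thread pool so the IOLoop itself never blocks. This is only a sketch: it assumes a reasonably recent Tornado, and the handler name, the trimmed-down payload, and the way the queue is injected are illustrative, not from the original code.

import json
from concurrent.futures import ThreadPoolExecutor

from boto.sqs.message import Message
from tornado import gen, web
from tornado.concurrent import run_on_executor

class ParkingSpotsHandler(web.RequestHandler):
    executor = ThreadPoolExecutor(max_workers=4)  # shared pool for the blocking SQS writes

    def initialize(self, sqs_send_queue):
        # boto Queue object, injected via the Application's handler config
        self._sqs_send_queue = sqs_send_queue

    @run_on_executor
    def _write_to_queue(self, msg):
        # Blocking boto I/O, now confined to a worker thread
        return self._sqs_send_queue.write(msg)

    @gen.coroutine
    def _send_parking_spots_request(self, data):
        msg = Message()
        msg.set_body(json.dumps(data))
        yield self._write_to_queue(msg)

    @gen.coroutine
    def post(self):
        data = {'zoom_level': self.get_argument('zoom_level')}  # trimmed-down payload
        yield self._send_parking_spots_request(data)
        self.finish()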
Edit: Assuming you're using boto, you may want to check out bototornado, which implements the exact same API I described above:
def write(self, message, callback=None):
    """
    Add a single message to the queue.

    :type message: Message
    :param message: The message to be written to the queue
    :rtype: :class:`boto.sqs.message.Message`
    :return: The :class:`boto.sqs.message.Message` object that was written.
    """

Using Twisted AMP with Database insertion

I am learning how to use Twisted AMP. I am developing a program that sends data from a client to a server and inserts the data into a SQLite3 DB. The server then sends back a result to the client indicating success or error (try/except might not be the best way to do this, but it is only a temporary solution while I work out the main problem). To do this I modified an example I found that originally did a sum and returned the result, so I realize this might not be the most efficient way to do what I am trying to do. In particular I am trying to time multiple insertions (i.e. send the data to the server multiple times for multiple insertions), and I have included the code I have written. It works, but it is clearly not a good way to send multiple pieces of data for insertion, since I am performing multiple connections before running the reactor.
I have tried several ways to get around this, including passing the ClientCreator to reactor.callWhenRunning(), but you cannot do this with a Deferred.
Any suggestions, advice or help with how to do this would be much appreciated. Here is the code.
Server:
from twisted.protocols import amp
from twisted.internet import reactor
from twisted.internet.protocol import Factory
import sqlite3, time

class Insert(amp.Command):
    arguments = [('data', amp.Integer())]
    response = [('insert_result', amp.Integer())]

class Protocol(amp.AMP):
    def __init__(self):
        self.conn = sqlite3.connect('biomed1.db')
        self.c = self.conn.cursor()
        self.res = None

    @Insert.responder
    def dbInsert(self, data):
        self.InsertDB(data)  # call the DB inserter
        result = self.res  # send back the result of the insertion
        return {'insert_result': result}

    def InsertDB(self, data):
        tm = time.time()
        print "insert time:", tm
        chx = data
        PID = 2
        device_ID = 5
        try:
            self.c.execute("INSERT INTO btdata4(co2_data, patient_Id, sensor_Id) VALUES ('%s','%s','%s')" % (chx, PID, device_ID))
        except Exception, err:
            print err
            self.res = 0
        else:
            self.res = 1
            self.conn.commit()

pf = Factory()
pf.protocol = Protocol
reactor.listenTCP(1234, pf)
reactor.run()
Client:
from twisted.internet import reactor
from twisted.internet.protocol import ClientCreator
from twisted.protocols import amp
import time

class Insert(amp.Command):
    arguments = [('data', amp.Integer())]
    response = [('insert_result', amp.Integer())]

def connected(protocol):
    return protocol.callRemote(Insert, data=5555).addCallback(gotResult)

def gotResult(result):
    print 'insert_result:', result['insert_result']
    tm = time.time()
    print "stop", tm

def error(reason):
    print "error", reason

tm = time.time()
print "start", tm

for i in range(10):  # send data over ten times
    ClientCreator(reactor, amp.AMP).connectTCP(
        '127.0.0.1', 1234).addCallback(connected).addErrback(error)

reactor.run()
End of Code.
Thank you.
A few things will improve your server code.
First and foremost: direct (blocking) database access is discouraged in Twisted, as it blocks the reactor. Twisted has a nice abstraction for database access that provides a Twisted-friendly approach to DB connections: twisted.enterprise.adbapi.
Now on to reusing the DB connection: if you want to reuse certain assets (like a database connection) across a number of Protocol instances, you should initialize them in the constructor of the Factory or, if you don't fancy initializing such things at launch time, create a resource access method that initializes the resource on the first call, assigns it to an instance variable, and returns it on subsequent calls.
When the Factory creates a specific Protocol instance, it adds a reference to itself inside the protocol; see line 97 of twisted.internet.protocol.
Then within your Protocol instance, you can access shared database connection instance like:
self.factory.whatever_name_for_db_connection.doSomething()
Reworked server code (I don't have Python, Twisted or even a decent IDE available, so this is pretty much untested; some errors are to be expected):
from twisted.protocols import amp
from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.enterprise import adbapi
import time

class AMPDBAccessProtocolFactory(Factory):
    def getDBConnection(self):
        if 'dbConnection' in dir(self):
            return self.dbConnection
        else:
            self.dbConnection = SQLLiteTestConnection(self.dbURL)
            return self.dbConnection

class SQLLiteTestConnection(object):
    """
    Provides abstraction for database access and some business functions.
    """
    def __init__(self, dbURL):
        self.dbPool = adbapi.ConnectionPool("sqlite3", dbURL, check_same_thread=False)

    def insertBTData4(self, data):
        # sqlite3 uses qmark-style placeholders
        query = "INSERT INTO btdata4(co2_data, patient_Id, sensor_Id) VALUES (?,?,?)"
        tm = time.time()
        print "insert time:", tm
        chx = data
        PID = 2
        device_ID = 5
        dF = self.dbPool.runQuery(query, (chx, PID, device_ID))
        dF.addCallback(self.onQuerySuccess, insert_data=data)
        return dF

    def onQuerySuccess(self, results, insert_data=None):
        """
        Here you can inspect the query results or add any other valuable
        information to be parsed at the client. For the test's sake we just
        report success; the original data is available in insert_data.
        """
        return {'insert_result': 1}

class Insert(amp.Command):
    arguments = [('data', amp.Integer())]
    response = [('insert_result', amp.Integer())]

class MyAMPProtocol(amp.AMP):
    @Insert.responder
    def dbInsert(self, data):
        db = self.factory.getDBConnection()
        dF = db.insertBTData4(data)
        dF.addErrback(self.onInsertError, data)
        return dF

    def onInsertError(self, error, data):
        """
        Here you could do additional error checking or inspect the data that
        was handed in for the insert. For now we just pass the failure along
        so that the client gets notified.
        """
        return error

if __name__ == '__main__':
    pf = AMPDBAccessProtocolFactory()
    pf.protocol = MyAMPProtocol
    pf.dbURL = 'biomed1.db'
    reactor.listenTCP(1234, pf)
    reactor.run()
Now on to the client. If AMP follows the overall RPC logic (I can't test it currently), it should be able to reuse the same connection across a number of calls. So I have created a ServerProxy class which holds that reusable protocol instance and provides an abstraction for the calls:
from twisted.internet import reactor
from twisted.internet.protocol import ClientCreator
from twisted.protocols import amp
import time

class Insert(amp.Command):
    arguments = [('data', amp.Integer())]
    response = [('insert_result', amp.Integer())]

class ServerProxy(object):
    def connected(self, protocol):
        self.serverProxy = protocol  # assign protocol as instance variable
        reactor.callLater(5, self.startMultipleInsert)  # after five seconds start the multiple-insert procedure

    def remote_insert(self, data):
        return self.serverProxy.callRemote(Insert, data=data)

    def startMultipleInsert(self):
        for i in range(10):  # send data over ten times
            dF = self.remote_insert(i)
            dF.addCallback(self.gotInsertResult)
            dF.addErrback(error)

    def gotInsertResult(self, result):
        print 'insert_result:', str(result)
        tm = time.time()
        print "stop", tm

def error(reason):
    print "error", reason

def main():
    tm = time.time()
    print "start", tm
    serverProxy = ServerProxy()
    ClientCreator(reactor, amp.AMP).connectTCP('127.0.0.1', 1234).addCallback(serverProxy.connected).addErrback(error)
    reactor.run()

if __name__ == '__main__':
    main()

Writing a blocking wrapper around twisted's IRC client

I'm trying to write a dead-simple interface for an IRC library, like so:
import re
import simpleirc

connection = simpleirc.Connect('irc.freenode.net', 6667)
channel = connection.join('foo')

find_command = re.compile(r'google ([a-z]+)').findall

for msg in channel:
    for t in find_command(msg):
        channel.say("http://google.com/search?q=%s" % t)
Working from their example, I'm running into trouble (code is a bit lengthy, so I pasted it here). Since the call to channel.__next__ needs to be returned when the callback <IRCClient instance>.privmsg is called, there doesn't seem to be a clean option. Using exceptions or threads seems like the wrong thing here; is there a simpler (blocking?) way of using Twisted that would make this possible?
In general, if you're trying to use Twisted in a "blocking" way, you're going to run into a lot of difficulties, because that's neither the way it's intended to be used, nor the way in which most people use it.
Going with the flow is generally a lot easier, and in this case, that means embracing callbacks. The callback-style solution to your question would look something like this:
import re

from twisted.internet import reactor, protocol
from twisted.words.protocols import irc

find_command = re.compile(r'google ([a-z]+)').findall

class Googler(irc.IRCClient):
    def privmsg(self, user, channel, message):
        for text in find_command(message):
            self.say(channel, "http://google.com/search?q=%s" % (text,))

def connect():
    cc = protocol.ClientCreator(reactor, Googler)
    return cc.connectTCP(host, port)

def run(proto):
    proto.join(channel)

def main():
    d = connect()
    d.addCallback(run)
    reactor.run()
This isn't absolutely required (but I strongly suggest you consider trying it). One alternative is inlineCallbacks:
import re

from twisted.internet import reactor, protocol, defer
from twisted.words.protocols import irc

find_command = re.compile(r'google ([a-z]+)').findall

class Googler(irc.IRCClient):
    def privmsg(self, user, channel, message):
        for text in find_command(message):
            self.say(channel, "http://google.com/search?q=%s" % (text,))

@defer.inlineCallbacks
def run():
    cc = protocol.ClientCreator(reactor, Googler)
    proto = yield cc.connectTCP(host, port)
    proto.join(channel)

def main():
    run()
    reactor.run()
Notice no more addCallbacks. It's been replaced by yield in a decorated generator function. This could get even closer to what you asked for if you had a version of Googler with a different API (the one above should work with IRCClient from Twisted as it is written - though I didn't test it). It would be entirely possible for Googler.join to return a Channel object of some sort, and for that Channel object to be iterable like this:
@defer.inlineCallbacks
def run():
    cc = protocol.ClientCreator(reactor, Googler)
    proto = yield cc.connectTCP(host, port)
    channel = proto.join(channel)
    for msg in channel:
        msg = yield msg
        for text in find_command(msg):
            channel.say("http://google.com/search?q=%s" % (text,))
It's only a matter of implementing this API on top of the ones already present. Of course, the yield expressions are still there, and I don't know how much this will upset you. ;)
It's possible to go still further away from callbacks and make the context switches necessary for asynchronous operation to work completely invisible. This is bad for the same reason it would be bad for sidewalks outside your house to be littered with invisible bear traps. However, it's possible. Using something like corotwine, itself based on a third-party coroutine library for CPython, you can have the implementation of Channel do the context switching itself, rather than requiring the calling application code to do it. The result might look something like:
from corotwine import protocol

def run():
    proto = Googler()
    transport = protocol.gConnectTCP(host, port)
    proto.makeConnection(transport)
    channel = proto.join(channel)
    for msg in channel:
        for text in find_command(msg):
            channel.say("http://google.com/search?q=%s" % (text,))
with an implementation of Channel that might look something like:
from corotwine import defer

class Channel(object):
    def __init__(self, ircClient, name):
        self.ircClient = ircClient
        self.name = name

    def __iter__(self):
        while True:
            d = self.ircClient.getNextMessage(self.name)
            message = defer.blockOn(d)
            yield message
This in turn depends on a new Googler method, getNextMessage, which is a straightforward feature addition based on existing IRCClient callbacks:
from twisted.internet import defer

class Googler(irc.IRCClient):
    def connectionMade(self):
        irc.IRCClient.connectionMade(self)
        self._nextMessages = {}

    def getNextMessage(self, channel):
        if channel not in self._nextMessages:
            self._nextMessages[channel] = defer.DeferredQueue()
        return self._nextMessages[channel].get()

    def privmsg(self, user, channel, message):
        if channel not in self._nextMessages:
            self._nextMessages[channel] = defer.DeferredQueue()
        self._nextMessages[channel].put(message)
To run this, you create a new greenlet for the run function and switch to it, and then start the reactor.
from greenlet import greenlet

def main():
    greenlet(run).switch()
    reactor.run()
When run gets to its first asynchronous operation, it switches back to the reactor greenlet (which is the "main" greenlet in this case, but it doesn't really matter) to let the asynchronous operation complete. When it completes, corotwine turns the callback into a greenlet switch back into run. So run is granted the illusion of running straight through, like a "normal" synchronous program. Keep in mind that it is just an illusion, though.
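As a tiny, standalone illustration of that switching trick (independent of Twisted and corotwine, so every name here is made up): the "worker" greenlet appears to run straight through a blocking call, but it actually switches back to the main greenlet at the "I/O" point and is resumed later with the result.

from greenlet import greenlet

results = []

def fake_io():
    # Pretend to start asynchronous I/O: switch to the main greenlet, which
    # will resume us later with the result (the role corotwine's blockOn
    # plays with a Deferred).
    return main_greenlet.switch('need data')

def worker():
    data = fake_io()              # looks blocking, is really a greenlet switch
    results.append('got %s' % data)

main_greenlet = greenlet.getcurrent()
work = greenlet(worker)
print(work.switch())              # runs worker until it "blocks"; prints 'need data'
work.switch('42')                 # "I/O finished": resume worker with the value
print(results)                    # ['got 42']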
So, it's possible to get as far away from the callback-oriented style that is most commonly used with Twisted as you want. It's not necessarily a good idea, though.