Mixing synchronous and async code in Python

I'm trying to convert a synchronous, callback-based flow in Python to an asynchronous flow using asyncio.
Basically, the code interacts heavily with TCP/UNIX sockets. It reads data from the sockets, manipulates it to make decisions, and writes data back to the other side. This happens over multiple sockets at once, and data is sometimes shared between the contexts to make decisions.
EDIT: The code is currently mostly based on registering a callback with a central entity for a specific socket, and having that entity run the callback when the relevant socket is readable (something like "call this function when that socket has data to be read"). Once the callback is called, a bunch of stuff happens, and eventually a new callback is registered for when new data is available. The central entity runs a select over all registered sockets to figure out which callbacks should be called.
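To illustrate the pattern, here is a minimal, hypothetical sketch (not my actual code) of that kind of select-based dispatcher:

import select

callbacks = {}  # socket -> function to call when the socket is readable

def register(sock, cb):
    callbacks[sock] = cb

def run():
    while callbacks:
        readable, _, _ = select.select(list(callbacks), [], [])
        for sock in readable:
            cb = callbacks.pop(sock)
            cb(sock)  # the callback typically re-registers itself when done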
I'm trying to do this without refactoring my entire codebase and to keep it as seamless as possible for the programmer - so I was thinking about it like this: all code should run the same way it does today, but whenever the current code does a socket.recv() to get new data, the process would yield execution to other tasks. When the read returns, it should go back to handling the data from the same point, using the new data it got.
To do this, I wrote a new class called AsyncSocket, which interacts with asyncio's IO streams. I placed the async/await statements almost solely in there, thinking I would implement the recv method in my class to make it look like a "regular IO socket" to the rest of my code.
So far, this matches my understanding of what async programming should allow.
Now to the problem:
My code awaits client connections - when a client connects, that client's context is allowed to read and write on its own connection.
I've simplified the flow to the following to clarify the problem:
import asyncio

class AsyncSocket():
    def __init__(self, reader, writer):
        self.reader = reader
        self.writer = writer

    def recv(self, numBytes):
        print("called recv!")
        data = self.read_mitigator(numBytes)
        return data

    async def read_mitigator(self, numBytes):
        print("Awaiting of AsyncSocket.reader.read")
        data = await self.reader.read(numBytes)
        print("Done Awaiting of AsyncSocket.reader.read data is %s " % data)
        return data
def mit2(aSock):
    return mit3(aSock)

def mit3(aSock):
    return aSock.recv(100)
async def echo_server(reader, writer):
    print("New Connection!")
    aSock = AsyncSocket(reader, writer)  # create a new async socket class and pass it on to the regular code
    while True:
        data = await some_func(aSock)  # this would eventually read from the socket
        print("Data read is %s" % (data))
        if not data:
            break
        writer.write(data)  # echo everything back

async def main(host, port):
    server = await asyncio.start_server(echo_server, host, port)
    await server.serve_forever()

asyncio.run(main('127.0.0.1', 5000))
mit2() and mit3() are synchronous functions that do things with the data on the way back before returning to the main client loop - but here I'm just using them as pass-throughs.
The problem starts when I play with the implementation of some_func().
A pass-through implementation (edit: it kind of works), but it still has issues:
def some_func(aSock):
    try:
        return (mit2(aSock))  # works
    except:
        print("Error!!!!")
While an implementation that reads the data and does something with it - like adding a suffix before returning - throws an error:
def some_func(aSock):
    try:
        return (mit2(aSock) + "something")  # doesn't work
    except:
        print("Error!!!!")
The error (as far as I understand it) means it's not really doing what it should:
New Connection!
called recv!
/Users/user/scripts/asyncServer.py:36: RuntimeWarning: coroutine 'AsyncSocket.read_mitigator' was never awaited
return (mit2(aSock) + "something") # doesn't work
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Error!!!!
Data read is None
And the echo server obviously doesn't work.
Obviously my real code looks more like option #2, with a lot more going on in some_func(), mit2() and mit3() - but I can't get this to work. I'm fairly new to asyncio/async/await - so what (rather basic, I guess) concept am I missing?

This code won't work as envisioned:
def recv(self, numBytes):
    print("called recv!")
    data = self.read_mitigator(numBytes)
    return data

async def read_mitigator(self, numBytes):
    ...
You cannot call an async function from a sync function and get the result; you must await it, which ensures that you return to the event loop if the data is not yet ready. This mismatch between async and sync code is sometimes referred to as the issue of function color.
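The direct fix, then, is to let async/await propagate all the way up the call chain instead of hiding it inside recv(). A sketch of what that might look like for the code in the question (assuming the intermediate functions can all be converted):

class AsyncSocket():
    def __init__(self, reader, writer):
        self.reader = reader
        self.writer = writer

    async def recv(self, numBytes):
        # recv is now a coroutine, so it can await the stream read directly
        return await self.reader.read(numBytes)

async def mit3(aSock):
    return await aSock.recv(100)

async def mit2(aSock):
    return await mit3(aSock)

async def some_func(aSock):
    # note: reader.read() returns bytes, so the suffix must be bytes too
    return await mit2(aSock) + b"something"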
Since your code is already using non-blocking sockets and an event loop, a good approach to porting it to asyncio might be to first switch to the asyncio event loop. You can use event loop methods like sock_recv to request data:
def start():
    loop = asyncio.get_event_loop()
    sock = make_socket()  # make sure it's non-blocking
    # wrap in a task so we can attach a callback regardless of whether
    # sock_recv returns a future or a coroutine on this Python version
    future_data = asyncio.ensure_future(loop.sock_recv(sock, 1024))
    future_data.add_done_callback(continue_read)
    # return to the event loop - when some data is ready
    # continue_read will be invoked

def continue_read(future):
    data = future.result()
    print('got', data)
    # ... do something with data, e.g. process it
    # and call sock_sendall with the response

asyncio.get_event_loop().call_soon(start)
asyncio.get_event_loop().run_forever()
Once you have the program working in that mode, you can start moving to coroutines, which allow the code to look like sync code, but work in exactly the same way:
async def start():
    loop = asyncio.get_event_loop()
    sock = make_socket()  # make sure it's non-blocking
    data = await loop.sock_recv(sock, 1024)
    # data is available "immediately", meaning the coroutine gets
    # automatically suspended when awaiting data that is not yet
    # ready, and automatically re-scheduled when the data is ready
    print('got', data)

asyncio.run(start())
The next step can be eliminating make_socket and switching to asyncio streams.
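For example, a minimal client-side sketch using streams (illustrative only; adapt host/port to your setup):

import asyncio

async def start(host, port):
    reader, writer = await asyncio.open_connection(host, port)
    data = await reader.read(1024)
    print('got', data)
    writer.close()
    await writer.wait_closed()

asyncio.run(start('127.0.0.1', 5000))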

Related

MongoDB: extremely huge log file?

I set up MongoDB (on Windows) a while ago to listen to WebSockets. Recently, I was checking the size of the dataset I have collected so far. The total size of my data folder where the collections are stored is about 7 GB now. Then I discovered that, surprisingly, the size of the log file is 175 GB (!!!). For now, I just used the rotate command to generate a new log file and deleted the old one. But of course, I am curious why such an extremely large log file is generated. I checked the content of the file. Among the millions (or billions?) of lines of messages, here are a few examples of how the messages typically look:
{"t":{"$date":"2022-01-27T01:52:07.723+01:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn298926","msg":"Interrupted operation as its client disconnected","attr":{"opId":33104664}}
{"t":{"$date":"2022-01-27T01:52:07.723+01:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn298926","msg":"Connection ended","attr":{"remote":"127.0.0.1:61724","uuid":"88da63c3-37dc-416f-a044-dcc3eae36d8b","connectionId":298926,"connectionCount":39}}
{"t":{"$date":"2022-01-27T01:52:07.726+01:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn298929","msg":"Interrupted operation as its client disconnected","attr":{"opId":33104670}}
{"t":{"$date":"2022-01-27T01:52:07.727+01:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn298929","msg":"Connection ended","attr":{"remote":"127.0.0.1:61727","uuid":"7fc3e5c7-5687-474f-aa0e-c83f501c16cc","connectionId":298929,"connectionCount":38}}
{"t":{"$date":"2022-01-27T01:52:07.737+01:00"},"s":"I", "c":"-", "id":20883, "ctx":"conn298932","msg":"Interrupted operation as its client disconnected","attr":{"opId":33104677}}
{"t":{"$date":"2022-01-27T01:52:07.737+01:00"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn298932","msg":"Connection ended","attr":{"remote":"127.0.0.1:61730","uuid":"0c8c0ecf-7aef-4bab-980b-267a3a0c78b7","connectionId":298932,"connectionCount":37}}
...
It seems there is some connect-disconnect churn going on (on various ports?), but I have no idea where it comes from or what causes it.
BEFORE YOU READ ON: first, a simple question that may not require going through all the code below: if these messages are not "critical" (meaning they do not indicate that something is going wrong), is it possible to simply turn them off in logging? That would be a simple solution. From the dataset I have, I can see that my script is functioning correctly and the data is accurately collected, so I assume it could be sufficient to simply turn off log messages of this type. READ FURTHER FOR MORE DETAILS...
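(For instance, if mongod's quiet mode is the right knob here, something like this sketch is what I have in mind - either systemLog.quiet: true in the mongod config file, or at runtime via setParameter; I'm not sure this is the recommended approach:)

from pymongo import MongoClient

# Hypothetical sketch: quiet mode suppresses informational connection
# accepted/ended messages, among others.
client = MongoClient(uri_mongo)  # uri_mongo as used in set_collection() below
client.admin.command("setParameter", 1, quiet=True)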
My basic async Python code for connecting to the WebSockets and receiving messages looks (shortened) like this:
async def run_socket(manager, subscription=None):
    async with websockets.connect(manager.url) as socket:
        # receive messages and process
        while True:
            # receive message
            message = await socket.recv()
            # process data and store
            manager.process_message(message)
    return
def run_stream(manager, subscription):
    while True:
        try:
            # create a new event loop
            loop = asyncio.new_event_loop()
            loop.run_until_complete(run_socket(manager, subscription))
        except KeyboardInterrupt as e:
            print(e)
            loop.close()
            return
        except Exception as e:
            # reconnect in case of exception
            print(e)
            loop.close()
            sleep(5)
            return
# MAIN LOOP
if __name__ == "__main__":
    # manager
    manager = create_websocket_manager(stream)
    threads = []
    for subscription in manager.generate_subscriptions():
        print(subscription)
        # create a new event loop
        loop = asyncio.new_event_loop()
        # create thread for the stream
        thread = threading.Thread(target=run_stream, args=(manager, subscription))
        thread.daemon = False
        # collect threads
        threads.append(thread)
        # run
        thread.start()
    for thread in threads:
        thread.join()
The function create_websocket_manager (not shown here) creates a custom "WebSocket manager" object that parses and stores the messages that arrive from different streams. All objects share the same base class, which (shortened) looks like this:
class WebsocketManager:
    def __init__(self, stream):
        # ... set_collection(self)
        return

    def set_collection(self):
        # ... uri_mongo, db_name, col_name
        self.col = MongoClient(uri_mongo)[db_name][col_name]

    def parse_message(self, message):  # -> list
        # ... message -> results
        return results

    def process_message(self, message):
        results = self.parse_message(message)
        if len(results) > 0:
            self.col.insert_many(results)
        return
The manager sets a collection for the stream (self.col = MongoClient(uri_mongo)[db_name][col_name]), then parses and stores the arriving messages to the collection.
The main loop shown above runs several threads for streams that are all processed by the same manager object. Additionally, I run several Python instances of the main-loop file for different managers. Maybe the different cmd instances running the WebSockets are what cause the connect-disconnect behavior?
If you read till here, thanks already for going through this long question, and I appreciate your feedback :)
Best, JZ

How to wait for coroutines to complete synchronously within method if event loop is already running?

I'm trying to create a Python-based CLI that communicates with a web service via websockets. One issue I'm encountering is that requests made by the CLI to the web service intermittently fail to get processed. Looking at the logs from the web service, I can see that the problem is that these requests are frequently being made at the same time as (or even after) the socket has closed:
2016-09-13 13:28:10,930 [22 ] INFO DeviceBridge - Device bridge has opened
2016-09-13 13:28:11,936 [21 ] DEBUG DeviceBridge - Device bridge has received message
2016-09-13 13:28:11,937 [21 ] DEBUG DeviceBridge - Device bridge has received valid message
2016-09-13 13:28:11,937 [21 ] WARN DeviceBridge - Unable to process request: {"value": false, "path": "testcube.pwms[0].enabled", "op": "replace"}
2016-09-13 13:28:11,936 [5 ] DEBUG DeviceBridge - Device bridge has closed
In my CLI I define a class CommunicationService that is responsible for handling all direct communication with the web service. Internally, it uses the websockets package to handle communication, which itself is built on top of asyncio.
CommunicationService contains the following method for sending requests:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))
    asyncio.ensure_future(self._ws.send(request))
...where _ws is a websocket opened earlier in another method:
self._ws = await websockets.connect(websocket_address)
What I want is to be able to await the future returned by asyncio.ensure_future and, if necessary, sleep for a short while after in order to give the web service time to process the request before the websocket is closed.
However, since send_request is a synchronous method, it can't simply await these futures. Making it asynchronous would be pointless as there would be nothing to await the coroutine object it returned. I also can't use loop.run_until_complete as the loop is already running by the time it is invoked.
I found someone describing a problem very similar to the one I have at mail.python.org. The solution that was posted in that thread was to make the function return the coroutine object in the case the loop was already running:
def aio_map(coro, iterable, loop=None):
    if loop is None:
        loop = asyncio.get_event_loop()
    coroutines = map(coro, iterable)
    coros = asyncio.gather(*coroutines, return_exceptions=True, loop=loop)
    if loop.is_running():
        return coros
    else:
        return loop.run_until_complete(coros)
This is not possible for me, as I'm working with PyRx (Python implementation of the reactive framework) and send_request is only called as a subscriber of an Rx observable, which means the return value gets discarded and is not available to my code:
class AnonymousObserver(ObserverBase):
    ...
    def _on_next_core(self, value):
        self._next(value)
On a side note, I'm not sure if this is some sort of problem with asyncio that's commonly come across or whether I'm just not getting it, but I'm finding it pretty frustrating to use. In C# (for instance), all I would need to do is probably something like the following:
void SendRequest(string request)
{
    this.ws.Send(request).Wait();
    // Task.Delay(500).Wait(); // Uncomment if necessary
}
Meanwhile, asyncio's version of "wait" unhelpfully just returns another coroutine that I'm forced to discard.
Update
I've found a way around this issue that seems to work. I have an asynchronous callback that gets executed after the command has executed and before the CLI terminates, so I just changed it from this...
async def after_command():
    await comms.stop()
...to this:
async def after_command():
    await asyncio.sleep(0.25)  # Allow time for communication
    await comms.stop()
I'd still be happy to receive any answers to this problem for future reference, though. I might not be able to rely on workarounds like this in other situations, and I still think it would be better practice to have the delay executed inside send_request so that clients of CommunicationService do not have to concern themselves with timing issues.
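For instance, something along these lines (an untested sketch) would keep the timing inside send_request while leaving it synchronous:

def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))

    async def _send_and_linger():
        await self._ws.send(request)
        await asyncio.sleep(0.25)  # give the service time to process

    asyncio.ensure_future(_send_and_linger())

Of course, stop() would then still need to wait for any such task still in flight, which is what the updates below deal with.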
Regarding Vincent's question:
Does your loop run in a different thread, or is send_request called by some callback?
Everything runs in the same thread - it's called by a callback. What happens is that I define all my commands to use asynchronous callbacks, and when executed some of them will try to send a request to the web service. Since they're asynchronous, they don't do this until they're executed via a call to loop.run_until_complete at the top level of the CLI - which means the loop is running by the time they're mid-way through execution and making this request (via an indirect call to send_request).
Update 2
Here's a solution based on Vincent's proposal of adding a "done" callback.
A new boolean field _busy is added to CommunicationService to represent whether comms activity is occurring.
CommunicationService.send_request is modified to set _busy to True before sending the request, and a done callback is added to the send task that resets _busy once it completes:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))
    def callback(_):
        self._busy = False
    self._busy = True
    asyncio.ensure_future(self._ws.send(request)).add_done_callback(callback)
CommunicationService.stop is now implemented to wait for this flag to be set false before progressing:
async def stop(self) -> None:
    """
    Terminate communications with TestCube Web Service.
    """
    if self._listen_task is None or self._ws is None:
        return
    # Wait for comms activity to stop.
    while self._busy:
        await asyncio.sleep(0.1)
    # Allow short delay after final request is processed.
    await asyncio.sleep(0.1)
    self._listen_task.cancel()
    await asyncio.wait([self._listen_task, self._ws.close()])
    self._listen_task = None
    self._ws = None
    logger.info('Terminated connection to TestCube Web Service')
This seems to work too, and at least this way all communication timing logic is encapsulated within the CommunicationService class as it should be.
Update 3
Nicer solution based on Vincent's proposal.
Instead of self._busy we have self._send_request_tasks = [].
New send_request implementation:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))
    task = asyncio.ensure_future(self._ws.send(request))
    self._send_request_tasks.append(task)
New stop implementation:
async def stop(self) -> None:
    if self._listen_task is None or self._ws is None:
        return
    # Wait for comms activity to stop.
    if self._send_request_tasks:
        await asyncio.wait(self._send_request_tasks)
    ...
You could use a set of tasks:
self._send_request_tasks = set()
Schedule the tasks using ensure_future and clean up using add_done_callback:
def send_request(self, request: str) -> None:
    task = asyncio.ensure_future(self._ws.send(request))
    self._send_request_tasks.add(task)
    task.add_done_callback(self._send_request_tasks.remove)
And wait for the set of tasks to complete:
async def stop(self):
    if self._send_request_tasks:
        await asyncio.wait(self._send_request_tasks)
Given that you're not inside an asynchronous function, you can use the yield from keyword to effectively implement await yourself. One caveat, though: putting yield from in the body turns send_request into a generator function, so calling it only creates a generator object; the send actually runs only when something iterates that generator to completion:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))
    future = asyncio.ensure_future(self._ws.send(request))
    yield from future.__await__()

How to use tornado.gen.coroutine in a TCP server?

I wrote a TCP server with Tornado.
Here is the code:
#!/usr/bin/env python
# coding=utf-8
from tornado.tcpserver import TCPServer
from tornado.ioloop import IOLoop
from tornado.gen import *

class TcpConnection(object):
    def __init__(self, stream, address):
        self._stream = stream
        self._address = address
        self._stream.set_close_callback(self.on_close)
        self.send_messages()

    def send_messages(self):
        self.send_message(b'hello \n')
        print("next")
        self.read_message()
        self.send_message(b'world \n')
        self.read_message()

    def read_message(self):
        self._stream.read_until(b'\n', self.handle_message)

    def handle_message(self, data):
        print(data)

    def send_message(self, data):
        self._stream.write(data)

    def on_close(self):
        print("the monitored %s has left" % (self._address,))

class MonitorServer(TCPServer):
    def handle_stream(self, stream, address):
        print("new connection", address, stream)
        conn = TcpConnection(stream, address)

if __name__ == '__main__':
    print('server start .....')
    server = MonitorServer()
    server.listen(20000)
    IOLoop.instance().start()
And I face the error assert self._read_callback is None, "Already reading". I guess the error occurs because multiple reads are issued on the socket at the same time, so I changed the send_messages function to use tornado.gen.coroutine. Here is the code:
@gen.coroutine
def send_messages(self):
    yield self.send_message(b'hello \n')
    response1 = yield self.read_message()
    print(response1)
    yield self.send_message(b'world \n')
    print((yield self.read_message()))
But there are some other errors: the code seems to stop after yield self.send_message(b'hello \n'), and the following code never executes.
What should I do about it? If you're aware of any Tornado TCP server (not HTTP!) code that uses tornado.gen.coroutine, please tell me. I would appreciate any links!
send_messages() calls send_message() and read_message() with yield, but these methods are not coroutines, so this will raise an exception.
The reason you're not seeing the exception is that you called send_messages() without yielding it, so the exception has nowhere to go (the garbage collector should eventually notice and print the exception, but that can take a long time). Whenever you call a coroutine, you should either use yield to wait for it to finish, or IOLoop.current().spawn_callback() to run the coroutine in the "background" (this tells Tornado that you do not intend to yield the coroutine, so it will print the exception as soon as it occurs). Also, whenever you override a method you should read the documentation to see whether coroutines are allowed (when you override TCPServer.handle_stream() you can make it a coroutine, but __init__() may not be a coroutine).
Once the exception is getting logged, the next step is to fix it. You can either make send_message() and read_message() coroutines (getting rid of the handle_message() callback in the process), or you can use tornado.gen.Task() to call callback-style code from a coroutine. I generally recommend using coroutines everywhere.
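For instance, here is a sketch of the coroutine route (assuming Tornado >= 4.0, where IOStream.write() and read_until() return Futures when called without a callback):

from tornado import gen
from tornado.ioloop import IOLoop

class TcpConnection(object):
    def __init__(self, stream, address):
        self._stream = stream
        self._address = address
        self._stream.set_close_callback(self.on_close)
        # __init__ cannot be a coroutine, so run send_messages in the background
        IOLoop.current().spawn_callback(self.send_messages)

    @gen.coroutine
    def send_messages(self):
        yield self._stream.write(b'hello \n')
        response1 = yield self._stream.read_until(b'\n')
        print(response1)
        yield self._stream.write(b'world \n')
        response2 = yield self._stream.read_until(b'\n')
        print(response2)

    def on_close(self):
        print("the monitored %s has left" % (self._address,))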

Scheduling a coroutine for a later loop iteration

I borrowed this code for a simple chat:
import tornado.ioloop
import tornado.web
import tornado.websocket
import tornado.gen

clients = []

class IndexHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(request):
        request.render("index.html")

class WebSocketChatHandler(tornado.websocket.WebSocketHandler):
    def open(self, *args):
        print("open", "WebSocketChatHandler")
        clients.append(self)

    def check_origin(self, origin):
        return True

    @tornado.gen.coroutine
    def on_message(self, message):
        for client in clients:
            client.write_message(message)

        @tornado.gen.coroutine
        def myroutine(m):
            print "mensaje: "
            c = (yield 123123123)
            print("mensaje", m, c)

        yield myroutine(message)

    def on_close(self):
        clients.remove(self)

app = tornado.web.Application([(r'/chat', WebSocketChatHandler), (r'/', IndexHandler)])
app.listen(8888)
tornado.ioloop.IOLoop.instance().start()
The Chat application works well (i.e. I see the echoes using a websocket client), and I modified it a bit to test some custom code.
And, just for testing purposes, I wanted to insert a presumably heavy function call which I wanted to make asynchronous.
The actual intention here is that myroutine will start a game engine as a parallel task.
Perhaps I am missing something, but the intention in my code is to re-schedule the coroutine in two parts. This means: the coroutine should print "mensaje:", then yield the value 123123123 (actually, this is an immediate value which will be wrapped into an already-resolved future - the value will be in the result), thus rescheduling itself for the next loop iteration, and (in that later iteration) print the given tuple ("mensaje", message, c).
My issue is that the function is never rescheduled (i.e. only "mensaje:" is printed to the console).
What am I doing wrong? This is my first attempt at Tornado (and async programming in general). How can I tell the Tornado loop something like: "dude, this value is my coroutine, and those are the arguments for it - please start it in parallel by scheduling it on the next loop iteration"?
There are two problems going on. First, you can't yield every kind of object from a coroutine; you must yield a Future or other special yieldable object. So when your coroutine yields 123123123, Tornado throws a "bad yield" exception. Unfortunately, Tornado's websocket code isn't built to catch exceptions from on_message if on_message is a coroutine, so the exception passes silently. See the warning at the bottom of the coroutine documentation.
The solution for you is to yield a valid object from myroutine. If you just want to yield for a moment, yield gen.moment:
print "one"
yield gen.moment
print "two"
If you want "mycoroutine" to run in parallel and not block "on_message", just call it without yielding:
mycoroutine(message)
But! Calling a coroutine this way means no one is listening to see if it throws an exception. Make sure you catch and log all exceptions within myroutine, since otherwise they will pass silently.
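For example, a sketch of the coroutine from the question wrapped in its own exception handling:

import logging
from tornado import gen

@gen.coroutine
def myroutine(m):
    try:
        yield gen.moment  # reschedule onto a later iteration of the IOLoop
        print("mensaje", m)
    except Exception:
        # without this, exceptions from a fire-and-forget coroutine pass silently
        logging.exception("myroutine failed")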

Correct use of coroutine in Tornado web server

I'm trying to convert a simple synchronous server to an asynchronous version. The server receives POST requests and retrieves the response from an external web service (Amazon SQS). Here's the synchronous code:
def post(self):
    zoom_level = self.get_argument('zoom_level')
    neLat = self.get_argument('neLat')
    neLon = self.get_argument('neLon')
    swLat = self.get_argument('swLat')
    swLon = self.get_argument('swLon')
    data = self._create_request_message(zoom_level, neLat, neLon, swLat, swLon)
    self._send_parking_spots_request(data)
    # ....other stuff

def _send_parking_spots_request(self, data):
    msg = Message()
    msg.set_body(json.dumps(data))
    self._sqs_send_queue.write(msg)
Reading the Tornado documentation and some threads here, I ended up with this code using coroutines:
def post(self):
    zoom_level = self.get_argument('zoom_level')
    neLat = self.get_argument('neLat')
    neLon = self.get_argument('neLon')
    swLat = self.get_argument('swLat')
    swLon = self.get_argument('swLon')
    data = self._create_request_message(zoom_level, neLat, neLon, swLat, swLon)
    self._send_parking_spots_request(data)
    self.finish()

@gen.coroutine
def _send_parking_spots_request(self, data):
    msg = Message()
    msg.set_body(json.dumps(data))
    yield gen.Task(write_msg, self._sqs_send_queue, msg)

def write_msg(queue, msg, callback=None):
    queue.write(msg)
Comparing the performance using siege, I find that the second version is even worse than the original one, so there's probably something about coroutines and Tornado asynchronous programming that I haven't understood at all.
Could you please help me with this?
Edit: self._sqs_send_queue is a queue object retrieved from the boto interface, and queue.write(msg) returns the message that was written to the queue.
Tornado relies on you converting all your I/O to be non-blocking. Simply sticking the same code you were using before inside a gen.Task will not improve performance at all, because the I/O itself is still going to block the event loop. Additionally, you need to make your post method a coroutine and call _send_parking_spots_request using yield for the code to behave properly. So, a "correct" solution would look something like this:
@gen.coroutine
def post(self):
    ...
    yield self._send_parking_spots_request(data)  # wait (without blocking the event loop) until the method is done
    self.finish()

@gen.coroutine
def _send_parking_spots_request(self, data):
    msg = Message()
    msg.set_body(json.dumps(data))
    yield gen.Task(write_msg, self._sqs_send_queue, msg)

def write_msg(queue, msg, callback=None):
    queue.write(msg, callback=callback)  # This has to do non-blocking I/O and invoke callback when done.
In this example, queue.write would need to be some API that sends your request using non-blocking I/O, and executes callback when a response is received. Without knowing exactly what queue in your original example is, I can't specify exactly how that can be implemented in your case.
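If queue.write is the plain blocking boto call, one generic option is to push it onto a thread pool so it no longer blocks the IOLoop - a sketch (the SqsWriter class is hypothetical, assuming a Tornado version with tornado.concurrent.run_on_executor):

from concurrent.futures import ThreadPoolExecutor

from tornado.concurrent import run_on_executor

class SqsWriter(object):
    executor = ThreadPoolExecutor(max_workers=4)

    def __init__(self, queue):
        self._queue = queue

    @run_on_executor
    def write_msg(self, msg):
        # blocking boto call, now run on the thread pool instead of the IOLoop thread
        return self._queue.write(msg)

A coroutine can then do result = yield SqsWriter(queue).write_msg(msg) without stalling other requests.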
Edit: Assuming you're using boto, you may want to check out bototornado, which implements the exact same API I described above:
def write(self, message, callback=None):
    """
    Add a single message to the queue.

    :type message: Message
    :param message: The message to be written to the queue

    :rtype: :class:`boto.sqs.message.Message`
    :return: The :class:`boto.sqs.message.Message` object that was written.
    """