How to handle a bidirectional gRPC stream asynchronously - Python

I have a game (or, for that matter, any remote user interface) with a server and multiple clients that should communicate over the network.
Both the client and the server should be able to send updates asynchronously.
This seems like a very natural service definition that lets gRPC manage the sessions.
syntax = "proto3";

package mygame;

service Game {
    // Bidirectional stream: both sides may send messages at any time.
    rpc participate(stream ClientRequest) returns (stream ServerResponse);
}

message ClientRequest {
    // Fields for the initial request and further updates
}

message ServerResponse {
    // Game updates
}
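For reference, the game_pb2 and game_pb2_grpc modules used below would be generated from this file with grpcio-tools (assuming the file is named game.proto):

python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. game.proto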
Implementing the client is trivial (although the following code is obviously incomplete and simplified).
import queue

import grpc

import game_pb2
import game_pb2_grpc


class Client:
    def __init__(self):
        self.channel = grpc.insecure_channel("localhost:50051")
        self.stub = game_pb2_grpc.GameStub(self.channel)
        self.output_queue = queue.Queue()

    def output_iter(self):
        # Blocks until the next outgoing message is available,
        # then yields it into the request stream.
        while True:
            client_output_msg = self.output_queue.get()
            self.output_queue.task_done()
            yield client_output_msg

    def do_work(self):
        for response in self.stub.participate(self.output_iter()):
            print(response)  # handle update


client = Client()
client.do_work()
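For completeness, outgoing updates would be produced elsewhere (a UI thread, say) by putting them on the queue; the empty ClientRequest below is just a placeholder:

# Enqueue an update from any other thread; output_iter() will
# pick it up and stream it to the server.
client.output_queue.put(game_pb2.ClientRequest())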
What seems hard is implementing the server without blocking.
from concurrent.futures import ThreadPoolExecutor
import queue

import grpc

import game_pb2_grpc


class Game(game_pb2_grpc.GameServicer):
    def __init__(self):
        self.pending_events = queue.Queue()

    def participate(self, request_iter, context):
        for client_update in request_iter:
            print(client_update)
            # !!!
            # The next bit won't happen if the client has no updates
            # !!!
            try:
                while True:
                    server_update = self.pending_events.get_nowait()
                    yield server_update
            except queue.Empty:
                pass


server = grpc.server(ThreadPoolExecutor(max_workers=100))
game_pb2_grpc.add_GameServicer_to_server(Game(), server)
server.add_insecure_port("[::]:50051")
server.start()
server.wait_for_termination()
As commented in the code, the client won't receive updates if it doesn't constantly send requests.
Maybe an async approach would be better, which might also solve other problems in this design.
PS: This issue has been solved with gRPC in Go here; however, I don't see how to translate that to Python's gRPC implementation.
I would be very happy about any help!

I was finally able to get this working using the Python asyncio API.
The basic idea is to decouple reading and writing into two coroutines using asyncio.create_task.
For anybody interested, here is a solution.
import asyncio

import grpc

import game_pb2
import game_pb2_grpc


class Game(game_pb2_grpc.GameServicer):
    async def read_client_requests(self, request_iter):
        async for client_update in request_iter:
            print("Received message from client:", client_update, end="")

    async def write_server_responses(self, context):
        # Dummy writer: sends 15 numbered responses, one every 0.5 s.
        for i in range(15):
            await context.write(game_pb2.ServerResponse(dummy_value=str(i)))
            await asyncio.sleep(0.5)

    async def participate(self, request_iter, context):
        # Reading and writing are decoupled, so neither blocks the other.
        read_task = asyncio.create_task(self.read_client_requests(request_iter))
        write_task = asyncio.create_task(self.write_server_responses(context))
        await read_task
        await write_task


async def serve():
    server = grpc.aio.server()
    game_pb2_grpc.add_GameServicer_to_server(Game(), server)
    server.add_insecure_port("[::]:50051")
    await server.start()
    await server.wait_for_termination()


if __name__ == "__main__":
    asyncio.run(serve())
Note that instead of using the write coroutine, yielding the responses would also work; a sketch of that variant follows.
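A minimal sketch of the yield variant, keeping the same dummy_value field and timing as above; participate itself becomes an async generator and only reading stays in a background task:

    async def participate(self, request_iter, context):
        # Reading still runs concurrently in the background...
        read_task = asyncio.create_task(self.read_client_requests(request_iter))
        # ...while responses are yielded directly instead of written.
        for i in range(15):
            yield game_pb2.ServerResponse(dummy_value=str(i))
            await asyncio.sleep(0.5)
        await read_task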

Related

When is async/await needed when writing to a socket in Python asyncio.Protocol?

I'm confused as to when and why the async/await syntax would be needed, or not, when using the low-level asyncio.Protocol approach in Python. Suppose I have a subclass of asyncio.Protocol, say EchoClientProtocol, which has a method send_message that external code can use to send any message to an echo server. For example:
import asyncio


class EchoClientProtocol(asyncio.Protocol):
    def __init__(self, message, on_con_lost=None):
        self.transport = None
        self.message = message
        self.on_con_lost = on_con_lost  # optional future, resolved on disconnect

    def send_message(self, txt):
        self.message = txt
        self.transport.write(self.message.encode())
        print('Data sent: {!r}'.format(self.message))

    def connection_made(self, transport):
        self.transport = transport
        self.send_message(self.message)

    def data_received(self, data):
        print('Data received: {!r}'.format(data.decode()))

    def connection_lost(self, exc):
        print('The server closed the connection')
        if self.on_con_lost is not None:
            self.on_con_lost.set_result(True)
We can create and run the client with code that uses the familiar async/await syntax, with some auto-reconnect logic built in:
async def main():
    loop = asyncio.get_running_loop()
    while True:
        try:
            message = "Initial message here"
            transport, protocol = await loop.create_connection(
                lambda: EchoClientProtocol(message),
                '127.0.0.1',
                8888
            )
        except OSError:
            print("Server not up, retrying in 5 seconds...")
            await asyncio.sleep(5)
        else:
            break


asyncio.run(main())
The above client class EchoClientProtocol does not declare send_message as async, and does not await the self.transport.write() method. I have seen code like this in many examples online. Why does this work if it uses asyncio? Isn't the async/await syntax necessary? Why or why not?
Suppose I have two clients in the same Python script and both have to run simultaneously without blocking each other. For example, suppose that one echo client connects to a server at port 8888 and another connects to another echo server at port 8899. I can use asyncio.gather to make sure that both run simultaneously, but in that case do I have to declare send_message as async, and do I have to await self.transport.write()? In other words, what would I have to change in the EchoClientProtocol above so that I could run two or more such clients in the same script (connecting to different servers on different ports) without them blocking each other?
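For concreteness, here is a minimal sketch of the two-client scenario as described (the ports and the 10-second lifetime are arbitrary assumptions); both connections share one event loop via asyncio.gather, and send_message stays an ordinary method:

async def run_client(host, port, message):
    loop = asyncio.get_running_loop()
    transport, protocol = await loop.create_connection(
        lambda: EchoClientProtocol(message), host, port)
    # transport.write() buffers and returns immediately, so nothing
    # here needs to await it; sleeping just keeps the connection open.
    await asyncio.sleep(10)
    transport.close()


async def run_two_clients():
    await asyncio.gather(
        run_client('127.0.0.1', 8888, 'hello 8888'),
        run_client('127.0.0.1', 8899, 'hello 8899'),
    )

asyncio.run(run_two_clients())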

Python-socketio: How to connect one client to multiple servers?

There is plenty of information and examples when it comes to connecting to one server with multiple clients. But I was wondering: is there a way for one client to connect to two servers at the same time? Here is my situation:
I have a Python client that brings data from one server, analyzes it, and sends an appropriate command to another server. There seems to be less information on this issue, if I may call it that.
Here is how I tried approaching the issue. First, I made a socketio.Client subclass, which would enable me to create two client instances. It did not work. What am I missing here?
import socketio


class SocketClient(socketio.Client):
    def __init__(self, server_ip):
        self.server_ip = server_ip  # server's ip address
        self.sio = socketio.Client(logger=True)

    def connect(self):
        self.sio.connect(self.server_ip, namespaces=['/my_namespace'])

    @self.sio.event
    def connect_error(self, error):
        print('connection error=> ', error)

    @self.sio.event
    def my_event(self, server_response):
        # Here I have to take the server_response
        # and send it to another server.
        # How do I do it?
        # self.sio.emit('some_event', server_response)
        # that does not work, as I do not have the second client instance
        pass

    @self.sio.event
    def my_other_event(self, server_response):
        # process the response
        pass


# initiate the two client instances:
if __name__ == '__main__':
    first_client = SocketClient('http://192.168.100.103')
    second_client = SocketClient('http://192.168.100.104')
    first_client.connect()
    second_client.connect()
After my first try did not work, I ditched the class-instance approach and went for a functional one:
import socketio

first_client = socketio.Client()
second_client = socketio.Client()


@second_client.event
@first_client.event
def connect():
    print(f'connected with id {first_client.sid}')


@second_client.event
@first_client.event
def connect_error(e):
    print('Error=> ', e)


@second_client.event
@first_client.event
def disconnect():
    print('disconnected')


@first_client.event
def my_event(server_response):
    # Here I have to take the server_response
    # and send it to another server.
    second_client.emit('some_event', server_response)  # is it even possible?


@second_client.event
def my_other_event(server_response):
    # handle the response
    pass


if __name__ == '__main__':
    first_client.connect('http://192.168.100.103')
    second_client.connect('http://192.168.100.104')
In both cases, I am technically creating two clients. I might as well make them into separate files like first_client.py and second_client.py.
See where I am going with this? The goal is to get the data from server one, process it, and send it to the other server, ideally with one client. Please forgive me if I am missing something very obvious here. Any help is much appreciated.
P.S. Both servers are up and running without any problem.
I am using a Namespace to solve this problem.
First, make a Namespace class:
import socketio


class MyCustomNamespace(socketio.AsyncClientNamespace):
    async def on_connect(self):
        print("I'm connected!")

    async def on_disconnect(self):
        print("I'm disconnected!")

    async def on_my_event(self, data):
        await self.emit('my_response', data)

    async def on_message(self, data):
        print("[echo]:", data)


class mysio:
    def __init__(self) -> None:
        self.sio = socketio.AsyncClient(logger=False, engineio_logger=False)
        self.sio.register_namespace(MyCustomNamespace('/'))  # bind the namespace
Then make two clients.
Since wait() would block the process, I use create_task():
import asyncio


async def main():
    async def fun1():
        sio1 = mysio().sio
        await sio1.connect('http://192.168.3.85:11451')
        await sio1.emit('message', b'11111110001')
        await sio1.wait()

    async def fun2():
        sio2 = mysio().sio
        await sio2.connect('http://localhost:8080')
        await sio2.emit('message', 'from sio2')
        await sio2.wait()

    tasks = [asyncio.create_task(fun1()), asyncio.create_task(fun2())]
    await asyncio.wait(tasks)


asyncio.run(main())
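The code above runs the two clients side by side but doesn't show the relay itself. One hypothetical way to wire it up (event names my_event and some_event carried over from the question) is to hand the second client to the namespace handling the first:

import asyncio

import socketio


class RelayNamespace(socketio.AsyncClientNamespace):
    # Hypothetical relay: forwards whatever server 1 pushes on to server 2.
    def __init__(self, namespace, target):
        super().__init__(namespace)
        self.target = target  # the second AsyncClient

    async def on_my_event(self, data):
        await self.target.emit('some_event', data)


async def main():
    source = socketio.AsyncClient()
    target = socketio.AsyncClient()
    source.register_namespace(RelayNamespace('/', target))
    await target.connect('http://192.168.100.104')
    await source.connect('http://192.168.100.103')
    await asyncio.gather(source.wait(), target.wait())

asyncio.run(main())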

Sending data to multiple websocket connections in Python

I have a server that gathers data from a bunch of GPS trackers, and I want to ship this data out in real time to X connected clients via WebSockets. The trackers connect over TCP (each in their own thread) and send data regularly to the server. The data is merged in a thread called data_merger and put in that thread's queue(). This mechanism works nicely and as intended; however, I'm running into issues when I want to send this data to the websocket connections.
I tried basing my solution on the websocket synchronization example, as this seemed to apply to my use case. I have a thread called outbound_worker that handles the websocket code. From thread.run():
def run(self):
    self.data_merger.name = 'data_merger'
    self.data_merger.start()
    loop = asyncio.new_event_loop()
    asyncio.set_event_loop(loop)
    start_server = websockets.serve(self.handle_clients, 'localhost', self.port)
    print("WebSocket server started for port %s at %s" % (self.port, datetime.now()))
    loop.run_until_complete(start_server)
    asyncio.get_event_loop().run_forever()
Then the handler method for the websocket server:
async def handle_clients(self, websocket, path):
    while True:
        try:
            # Register the websocket to the connected set
            await self.register(websocket)
            data = self.data_merger.queue.get()
            await self.send_to_clients(data)
            await asyncio.sleep(0.1)
        except websockets.ConnectionClosed:
            print("Connection closed")
            await self.unregister(websocket)
            break

async def send_to_clients(self, data):
    data = json.dumps(data)
    if self.connected:
        await asyncio.wait([ws.send(data) for ws in self.connected])
The register() and unregister() methods are identical to the example I linked above. The client I'm using is a basic loop that prints the data received:
async def hello():
    uri = "ws://localhost:64000"
    while True:
        async with websockets.connect(uri) as websocket:
            print("Awaiting data...")
            data = await websocket.recv()
            #print(f"{data}")
            print(f"{json.loads(data)}")

asyncio.get_event_loop().run_until_complete(hello())
As I am new to asynchronous calls in Python and to websockets in general, I'm not sure if my approach is correct here. Since I am trying to push data right after registering a new connection, the code seems to halt at the await self.send_to_clients(data) line. Should I rather handle this in the data_merger thread and pass in the connected set?
Another issue is that if I simply use the client_handler to register() and unregister() the new connections, it seems to just loop over the register() part and I'm unable to connect a second client.
I guess my questions can be condensed into the following:
How do I accept and manage multiple open connections over websocket, similar to a multithreaded socket server?
Is there a way to trigger a function call (for instance, register()) only on new websocket connections, similar to socket.listen() and socket.accept()?
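For what it's worth, a common shape for this problem looks like the sketch below (assumptions: a websockets release recent enough to provide websockets.broadcast, and the port and data_merger names from the question). The handler runs once per new connection, which answers the register()-on-accept question, and a single broadcaster task fans the merged data out:

import asyncio
import json

import websockets

connected = set()


async def handler(websocket, path):
    # Runs once per new connection (the register() moment),
    # then just keeps the socket open until the client disconnects.
    connected.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        connected.remove(websocket)


async def broadcaster(queue):
    # A single task drains the merged data and fans it out to all clients.
    while True:
        data = await queue.get()
        websockets.broadcast(connected, json.dumps(data))


async def main():
    queue = asyncio.Queue()
    # The data_merger thread can hand items over safely with:
    #   loop.call_soon_threadsafe(queue.put_nowait, item)
    async with websockets.serve(handler, 'localhost', 64000):
        await broadcaster(queue)

asyncio.run(main())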

Multiple tornado websockets block one another

Hi all.
In my project, I built two websocket connections: one is used for sending pictures from server to client, and the other is used for sending some control data from client to server. For now I use only one client.

var ws2 = new WebSocket('ws://xx.xx.xx.xx:9002/control')
var ws1 = new WebSocket('ws://xx.xx.xx.xx:9002/data')
ws1.binaryType = 'blob'
ws2.onopen = () => console.log('ws2 connected.')

On the server side, once ws1 is open, it constantly sends data to the client. q is a global queue from which I get the data. This part works fine.
The problem is that the program never reaches on_message(self, message) in ws2 after I send a message from the client side.
class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.render("index.html")


class myWebSocket(tornado.websocket.WebSocketHandler):
    def check_origin(self, origin):
        return True

    def open(self):
        print("websocket1 open.")
        while self.ws_connection is not None:
            print("send...")
            try:
                self.write_message(q.get(), True)
            except tornado.websocket.WebSocketClosedError:
                self.close()

    def on_message(self, message):
        print("websocket1 received message.")
        print(message)

    def on_close(self):
        print("websocket1 closed.")


class controlWebSocket(tornado.websocket.WebSocketHandler):
    def check_origin(self, origin):
        return True

    def open(self):
        print("websocket2 open")

    def on_message(self, message):
        print("websocket2 received message.")
        print(message)

    def on_close(self):
        print("websocket2 closed.")
I am sure the client successfully sends the control data when a button is clicked:

<script>
function test(){
    ws2.send("this is from ws2.")
    alert("home clicked.")
}
</script>

I also found that once the open function in ws1 stops, ws2 can receive the message. I don't understand why ws1 sending data keeps ws2 from receiving data.
I'm new to Python programming and Tornado websockets. Can anyone help with this problem? Thanks a lot!
Tornado is built on non-blocking concurrency. This means you must generally avoid blocking methods (or run them in a thread pool). q.get() is a blocking method, so nothing else in Tornado can run while it is waiting for a message. You should probably replace this queue with a tornado.queues.Queue.
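A sketch of what that change might look like (assuming the same global q, now a tornado.queues.Queue; Tornado has allowed open() to be a coroutine since 5.1):

import tornado.websocket
from tornado.queues import Queue

q = Queue()


class myWebSocket(tornado.websocket.WebSocketHandler):
    async def open(self):
        print("websocket1 open.")
        while self.ws_connection is not None:
            # await q.get() yields to the event loop while the queue is
            # empty, so ws2's on_message can run in the meantime.
            data = await q.get()
            try:
                self.write_message(data, True)
            except tornado.websocket.WebSocketClosedError:
                self.close()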

Python 3 websockets - send message before closing connection

I'm new to Stack Overflow (although have been a long-term "stalker"!) so please be gentle with me!
I'm trying to learn Python, in particular asyncio using websockets.
Having scoured the web for examples/tutorials, I've put together the following tiny chat application, and could use some advice before it gets bulkier (more commands etc.) and becomes difficult to refactor.
My main question is: why (when sending the DISCONNECT command) does it need the asyncio.sleep(0) in order to send the disconnection verification message BEFORE closing the connection?
Other than that, am I on the right track with the structure here?
I feel that there's too much async/await but I can't quite wrap my head around why.
Staring at tutorials and S/O posts for hours on end doesn't seem to be helping at this point, so I thought I'd get some expert advice directly!
Here we go: a simple WS server that responds to "nick", "msg", "test" & "disconnect" commands. No prefix required, e.g. "nick Rachel".
import asyncio
import sys

import websockets


class ChatServer:
    def __init__(self):
        print("Chat Server Starting..")
        self.Clients = set()
        if sys.platform == 'win32':
            self.loop = asyncio.ProactorEventLoop()
            asyncio.set_event_loop(self.loop)
        else:
            self.loop = asyncio.get_event_loop()

    def run(self):
        start_server = websockets.serve(self.listen, '0.0.0.0', 8080)
        try:
            self.loop.run_until_complete(start_server)
            print("Chat Server Running!")
            self.loop.run_forever()
        except:
            print("Chat Server Error!")

    async def listen(self, websocket, path):
        client = Client(websocket=websocket)
        sender_task = asyncio.ensure_future(self.handle_outgoing_queue(client))
        self.Clients.add(client)
        print("+ connection: " + str(len(self.Clients)))
        while True:
            try:
                msg = await websocket.recv()
                if msg is None:
                    break
                await self.handle_message(client, msg)
            except websockets.exceptions.ConnectionClosed:
                break
        self.Clients.remove(client)
        print("- connection: " + str(len(self.Clients)))

    async def handle_outgoing_queue(self, client):
        while client.websocket.open:
            msg = await client.outbox.get()
            await client.websocket.send(msg)

    async def handle_message(self, client, data):
        strdata = data
        param = data.split(" ")
        _cmd = param[0].lower()
        try:
            # Check to see if the command exists. Otherwise, AttributeError is thrown.
            func = getattr(self, "cmd_" + _cmd)
            try:
                await func(client, param, strdata)
            except IndexError:
                await client.send("Not enough parameters!")
        except AttributeError:
            await client.send("Command '%s' does not exist!" % (_cmd))

    # SERVER COMMANDS
    async def cmd_nick(self, client, param, strdata):
        # This command needs a parameter (with at least one character).
        # If not supplied, IndexError is raised.
        # Is there a cleaner way of doing this? Otherwise it'll need to
        # reside within all functions that require a param.
        test = param[1][0]
        # If we've reached this point there's definitely a parameter supplied
        client.Nick = param[1]
        await client.send("Your nickname is now %s" % (client.Nick))

    async def cmd_msg(self, client, param, strdata):
        # This command needs a parameter (with at least one character).
        # If not supplied, IndexError is raised.
        test = param[1][0]
        # If we've reached this point there's definitely a parameter supplied
        message = strdata.split(" ", 1)[1]
        # Before we proceed, do we have a nickname?
        if client.Nick is None:
            await client.send("You must choose a nickname before sending messages!")
            return
        for each in self.Clients:
            await each.send("%s says: %s" % (client.Nick, message))

    async def cmd_test(self, client, param, strdata):
        # This command doesn't need a parameter, so simply let the client
        # know they issued this command successfully.
        await client.send("Test command reply!")

    async def cmd_disconnect(self, client, param, strdata):
        await client.send("DISCONNECTING")
        # If this isn't here we don't receive the "disconnecting" message -
        # just an exception in "handle_outgoing_queue"?
        await asyncio.sleep(0)
        await client.websocket.close()


class Client():
    def __init__(self, websocket=None):
        self.websocket = websocket
        self.IPAddress = websocket.remote_address[0]
        self.Port = websocket.remote_address[1]
        self.Nick = None
        self.outbox = asyncio.Queue()

    async def send(self, data):
        await self.outbox.put(data)


chat = ChatServer()
chat.run()
Your code uses infinite-size Queues, which means .put() calls .put_nowait() and returns immediately. (If you do want to keep these queues in your code, consider using None in the queue as a signal to close a connection, and move client.websocket.close() into handle_outgoing_queue().)
Another issue: consider replacing for x in seq: await co(x) with await asyncio.wait([co(x) for x in seq]). Try it with asyncio.sleep(1) to experience a dramatic difference.
I believe a better option would be dropping all the outbox Queues and just relying on the built-in asyncio queues and ensure_future. The websockets package already includes Queues in its implementation.
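A sketch of the first two suggestions applied to the OP's methods (None as the close sentinel; note that on Python 3.11+ the coroutines passed to asyncio.wait would need to be wrapped in asyncio.create_task):

    async def handle_outgoing_queue(self, client):
        while client.websocket.open:
            msg = await client.outbox.get()
            if msg is None:
                # Close sentinel: every message queued before it has
                # already been sent, so the ordering problem disappears.
                await client.websocket.close()
                break
            await client.websocket.send(msg)

    async def cmd_msg(self, client, param, strdata):
        message = strdata.split(" ", 1)[1]
        # Fan out to all clients concurrently instead of one at a time.
        await asyncio.wait([each.send("%s says: %s" % (client.Nick, message))
                            for each in self.Clients])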
I want to point out that the author of websockets indicated in a post on July 17 of 2017 that websockets used to return None when the connection was closed but that was changed at some point. Instead he suggests that you use a try and deal with the exception. The OP's code shows both a check for None AND a try/except. The None check is needlessly verbose and apparently not even accurate since with the current version, websocket.recv() doesn't return anything when the client closes.
Addressing the "main" question, it looks like a race condition of sorts. Remember that asyncio does its work by going around and touching all the awaited elements in order to nudge them along. If your 'close connection' command is processed at some point ahead of when your queue is cleared, the client will never get that last message in the queue. Adding the asyncio.sleep adds an extra step to the round robin and probably puts your queue-emptying task ahead of your 'close connection'.
Addressing the amount of awaits, it's all about how many asynchronous things you need to have happen to accomplish the goal. If you block at any point you'll stop all the other tasks that you want to keep going.
