Using Python gRPC, I would like to be able to cancel a long-running unary-stream call from the client side, when a threading.Event is set.
def application(stub: StreamsStub, event: threading.Event):
stream = stub.Application(ApplicationStreamRequest())
try:
for resp in stream:
print(resp)
except grpc.RpcError as e:
print(e)
For the time being I am cancelling the stream using the channel.close() method, but of course this closes all connections rather than just this stream.
Could someone suggest how I can use the event to cancel the stream iterator? Thanks
Below is some code for a gRPC UnaryStream call. The server sends an unending number of replies, leaving the client to decide when to stop receiving them.
Instead of using a counter, you can have a thread go off and do some work and set an event; check that event before calling cancel() rather than checking the counter.
Note: using Python 2.7
Protofile:
syntax = "proto3";
package my_package;
service HeartBeat {
rpc Beats(Counter) returns (stream Counter) {}
}
message Counter {
int32 counter = 1;
}
Client:
from __future__ import print_function
import grpc
import heartbeat_pb2
import heartbeat_pb2_grpc
def get_beats(stub, channel):
try:
result_iterator = stub.Beats(heartbeat_pb2.Counter(counter=0))
for result in result_iterator:
print("Count: {}".format(result.counter))
if result.counter >= 3:  # We only want 3 'beats'
result_iterator.cancel()
except grpc.RpcError as rpc_error:
if rpc_error.code() == grpc.StatusCode.CANCELLED:
pass # Otherwise, a traceback is printed
def run():
with grpc.insecure_channel('localhost:9999') as channel:
stub = heartbeat_pb2_grpc.HeartBeatStub(channel)
get_beats(stub, channel)
if __name__ == '__main__':
run()
Server:
from concurrent import futures
import grpc
from proto_generated import heartbeat_pb2
from proto_generated import heartbeat_pb2_grpc
import time
class HeartBeatServicer(heartbeat_pb2_grpc.HeartBeatServicer):
def Beats(self, request, context):
# Not required, only to show sending the server a message
print("Beats: {}".format(request.counter))
def response_message():
i = 0
while context.is_active():
print("Sending {}".format(i))
response = heartbeat_pb2.Counter(counter=i)
i += 1
time.sleep(1) # Simulate doing work
yield response
return response_message()
def serve():
server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
heartbeat_pb2_grpc.add_HeartBeatServicer_to_server(
HeartBeatServicer(), server)
server.add_insecure_port('[::]:9999')
server.start()
server.wait_for_termination()
if __name__ == '__main__':
serve()
The _Rendezvous object returned by an RPC call implements grpc.RpcError, grpc.Future, and grpc.Call, so cancelling the stream is as simple as calling stream.cancel() (from the grpc.Future interface).
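Applied to the original application function, a minimal sketch (assuming the StreamsStub and ApplicationStreamRequest names from the question) could watch the threading.Event in a helper thread and cancel only this stream when it fires:
import threading
import grpc

def application(stub: StreamsStub, event: threading.Event):
    stream = stub.Application(ApplicationStreamRequest())

    def canceller():
        # Wait for the event, then cancel just this call (grpc.Future interface).
        event.wait()
        stream.cancel()

    threading.Thread(target=canceller, daemon=True).start()

    try:
        for resp in stream:
            print(resp)
    except grpc.RpcError as e:
        if e.code() == grpc.StatusCode.CANCELLED:
            pass  # expected when the event is set
        else:
            raise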
I'm trying to write a Python script that connects to a Node.js server using the socket.io package. The server receives events from the client and responds with other events. As an example, say the client sends a "getHome" event and the server responds with a "homePage" event carrying some data. What I want is to be able to send an event from the client, block the execution of the script until the response is received, process the response, and then do something else based on it. The code I wrote is:
#!/usr/bin/python3
import socketio
sio = socketio.Client()
@sio.event
def message(data):
print(data)
@sio.event
def homePage(data):
print(data)
sio.connect('http://docedit/socket.io/')
print("First call")
sio.emit("getHome")
print("Second call")
sio.emit("getHome")
The problem is that the second call to "emit" is done before receiving the response for the first one. The output of the script is something like:
First call
Second call
Welcome to Home <- response from the server
Welcome to Home <- response from the server
Reading the documentation, I tried to use "call" instead of "emit" but then the execution blocks forever, even if the homePage function executes normally:
#!/usr/bin/python3
import socketio
sio = socketio.Client()
@sio.event
def message(data):
print(data)
@sio.event
def homePage(data):
print(data)
sio.connect('http://docedit/socket.io/')
print("First call")
sio.call("getHome")
print("Second call")
sio.call("getHome")
Output:
First call
Welcome to Home <- response from the server
I didn't find an example using call, so maybe I'm using it wrong... any help?
The best way is to use some kind of lock:
from threading import Lock
lock_me = Lock()
lock_me.acquire()  # ensure the lock is acquired beforehand
import socketio
sio = socketio.Client()
@sio.event
def message(data):
print(data)
@sio.event
def homePage(data):
print(data)
lock_me.release()
sio.connect('http://docedit/socket.io/')
print("First call")
sio.emit("getHome")
lock_me.acquire()
print("Second call")
sio.emit("getHome")
Another way is with conditions or notify :)
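For instance, a minimal sketch of that idea using a threading.Event (same getHome/homePage events as above; python-socketio runs handlers on a background thread, so waiting in the main script does not deadlock):
import threading
import socketio

sio = socketio.Client()
response_received = threading.Event()

@sio.event
def homePage(data):
    print(data)
    response_received.set()  # unblock whoever is waiting for the reply

sio.connect('http://docedit/socket.io/')

print("First call")
response_received.clear()
sio.emit("getHome")
response_received.wait()  # block until homePage has run

print("Second call")
response_received.clear()
sio.emit("getHome")
response_received.wait()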
The code below is a simplified version of a Tornado-based TCP server that is currently used to host a Videotex system. This code was derived from the Tornado documentation, and the server has been running in a live environment for some time without issue; however, there is a feature I need to add.
The system currently blocks until a character is received from the client before returning the data via the stream.write. As the system typically runs at 1200 baud at the client end (via a telnet modem), this means that the user has to wait until all stream writes have completed before the next 'user entered' character is processed.
What I would like to do is find a way to abandon writing data via stream.write if another character is received from the client.
I am new to Tornado and fairly new to Python, however, I have coded asynchronous functions and threaded solutions in the past using C#.
From the documentation, the stream.write operation is asynchronous, so I am assuming the call may return before the data is completely written. I am left thinking that I need a way to abandon/empty/advance the write buffer to stop the write operation if a new char is detected on stream.read.
One option that would seem to give me what I need is to somehow perform the stream.writes on another thread; however, this approach seems inappropriate when using Tornado's IOLoop etc.
Is there a way to give me the facility I am after? I have full control of the code and am happy to restructure the app if needed.
import logging
import struct
import os
import traceback
from tornado import gen
from tornado.ioloop import IOLoop
from tornado.iostream import StreamClosedError
from tornado.tcpserver import TCPServer
# Configure logging.
logger = logging.getLogger(os.path.basename(__file__))
logger.setLevel(logging.INFO)
# Cache this struct definition; important optimization.
int_struct = struct.Struct("<i")
_UNPACK_INT = int_struct.unpack
_PACK_INT = int_struct.pack
class TornadoServer(TCPServer):
def start(self, port):
self.port = port
self.listen(port)
@gen.coroutine
def handle_stream(self, stream, address):
logging.info("[viewdata] Connection from client address {0}.".format(address))
try:
while True:
char = yield stream.read_bytes(1) # this call blocks
asc = ord(char)
logger.info('[viewdata] Byte Received {0} ({1})'.format(hex(asc), asc))
# Do some processing using the received char and return the appropriate page of data
stream.write('This is the data you asked for...'.encode())
except StreamClosedError as ex:
logger.info("[viewdata] {0} Disconnected: {1} Message: {2}".format(address, type(ex), str(ex)))
except Exception as ex:
logger.error("[viewdata] {0} Exception: {1} Message: {2}".format(address, type(ex), str(ex)))
logger.error(traceback.format_exc())
if __name__ == '__main__':
server = TornadoServer()
server.start(25232)
loop = IOLoop.current()
loop.start()
The main idea is that you move the long processing into a separate task.
When you receive some new data, you choose what to do (in the case below I cancel the current operation).
import logging
import os
import traceback
import threading
from tornado import gen
from tornado.ioloop import IOLoop
from tornado.iostream import StreamClosedError
from tornado.tcpserver import TCPServer
# Configure logging.
logger = logging.getLogger(os.path.basename(__file__))
logger.setLevel(logging.INFO)
class TornadoServer(TCPServer):
def start(self, port):
self.port = port
self.listen(port)
async def process_stream(self, stream, char, cancel_event):
asc = ord(char)
logger.info('[viewdata] Byte Received {0} ({1})'.format(hex(asc), asc))
N = 5
for i in range(N):
if cancel_event.is_set():
logger.info('[viewdata] Abort streaming')
break
# Do some processing using the received char and return the appropriate page of data
msg = 'This is the {0} data you asked for...'.format(i)
logger.info(msg)
await stream.write('This is the part {0} of {1} you asked for...'.format(i, N).encode())
await gen.sleep(1.0) # make this processing longer..
async def handle_stream(self, stream, address):
process_stream_future = None
cancel_event = None
logging.info("[viewdata] Connection from client address {0}.".format(address))
while True:
try:
char = await stream.read_bytes(1) # this call blocks
# when received client input, cancel running job
if process_stream_future:
process_stream_future.cancel()
if cancel_event:
cancel_event.set()
cancel_event = threading.Event()
process_stream_future = gen.convert_yielded(
self.process_stream(stream, char, cancel_event))
IOLoop.current().add_future(process_stream_future, lambda f: f.result())
except StreamClosedError as ex:
logger.info("[viewdata] {0} Disconnected: {1} Message: {2}".format(address, type(ex), str(ex)))
except Exception as ex:
logger.error("[viewdata] {0} Exception: {1} Message: {2}".format(address, type(ex), str(ex)))
logger.error(traceback.format_exc())
if __name__ == '__main__':
server = TornadoServer()
server.listen(25232)
loop = IOLoop.current()
loop.start()
Summary
I have a client-server application which makes use of Websockets. The backend (server) part is implemented in Python using autobahn.
The server, in addition to serving a Websockets endpoint, runs a series of threads which feed the Websockets channel with data through a queue.Queue().
One of these threads has a problem: it crashes on a missing parameter and hangs while resolving the exception.
Implementation details
The server implementation (cut down to highlight the problem):
from autobahn.asyncio.websocket import WebSocketServerProtocol, WebSocketServerFactory
import time
import threading
import arrow
import queue
import asyncio
import json
# backends of components
import dummy
class MyServerProtocol(WebSocketServerProtocol):
def __init__(self):
super().__init__()
print("webserver initialized")
# global queue to handle updates from modules
self.events = queue.Queue()
# consumer
threading.Thread(target=self.push).start()
threading.Thread(target=dummy.Dummy().dummy, args=(self.events,)).start()
def push(self):
""" consume the content of the queue and push it to the browser """
while True:
update = self.events.get()
print(update)
if update:
self.sendMessage(json.dumps(update).encode('utf-8'), False)
print(update)
time.sleep(1)
def worker(self):
print("started thread")
while True:
try:
self.sendMessage(arrow.now().isoformat().encode('utf-8'), False)
except AttributeError:
print("not connected?")
time.sleep(3)
def onConnect(self, request):
print("Client connecting: {0}".format(request.peer))
def onOpen(self):
print("WebSocket connection open.")
def onClose(self, wasClean, code, reason):
print("WebSocket connection closed: {0}".format(reason))
if __name__ == '__main__':
factory = WebSocketServerFactory(u"ws://127.0.0.1:9100")
factory.protocol = MyServerProtocol
loop = asyncio.get_event_loop()
coro = loop.create_server(factory, '0.0.0.0', 9100)
loop.run_until_complete(coro)
loop.run_forever()
The dummy module imported in the code above:
import time
import arrow
class Dummy:
def __init__(self, events):
self.events = events
print("dummy initialized")
def dummy(self):
while True:
self.events.put({
'dummy': {
'time': arrow.now().isoformat()
}
})
time.sleep(1)
The problem
When running the code above and connecting from a client, I get webserver initialized on the output (which proves that the connection was initiated), and WebSocket connection to 'ws://127.0.0.1:9100/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED on the client.
When debugging the code, I see that the call to threading.Thread(target=dummy.Dummy().dummy, args=(self.events,)).start() crashes, and the debugger (PyCharm) leads me to C:\Program Files (x86)\Python36-32\Lib\asyncio\selector_events.py, specifically to line 236:
# It's now up to the protocol to handle the connection.
except Exception as exc:
if self._debug:
The thread hangs when executing if self._debug, but I can see on the except line (thanks to PyCharm) that
exc: __init__() missing 1 required positional argument: 'events'
My question
Why is this parameter missing? It is provided via the threading.Thread(target=dummy.Dummy().dummy, args=(self.events,)).start() call.
As a side question: why does the thread hang on the if condition?
Notes
there is never a Traceback thrown by my program (due to the hang)
removing this thread call resolves the issue (the client connects correctly)
The events arg is needed for the constructor, not the dummy method. I think you meant something more like:
d = Dummy(self.events)
threading.Thread(target=d.dummy).start()
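Put back into the protocol, the corrected wiring might look roughly like this (a sketch of __init__ only; daemon=True is an addition so the helper threads don't keep the process alive):
def __init__(self):
    super().__init__()
    print("webserver initialized")
    # global queue to handle updates from modules
    self.events = queue.Queue()
    # consumer
    threading.Thread(target=self.push, daemon=True).start()
    # producer: construct Dummy with the queue, then run its dummy() method
    d = dummy.Dummy(self.events)
    threading.Thread(target=d.dummy, daemon=True).start()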
I'm working on a websocket client listening to a tornado server.
Once the client receives a message from the server, it exits silently.
Following is the code I've implemented.
#!/usr/bin/python
import tornado.websocket
from tornado import gen
import requests
@gen.coroutine
def test_ws():
client = yield tornado.websocket.websocket_connect("ws://localhost:8888/subscribe/ports")
msg = yield client.read_message()
print(msg)
if __name__ == "__main__":
loop = tornado.ioloop.IOLoop()
loop.run_sync(test_ws)
The client runs until it receives the first message from the server, but I want it to run indefinitely.
Am I missing something?
Use a loop:
@gen.coroutine
def test_ws():
client = yield tornado.websocket.websocket_connect("ws://localhost:8888/subscribe/ports")
while True:
msg = yield client.read_message()
print(msg)
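One extra detail worth noting (not part of the original answer): read_message returns None once the server closes the connection, so you probably want to break out of the loop in that case instead of looping forever on None:
@gen.coroutine
def test_ws():
    client = yield tornado.websocket.websocket_connect("ws://localhost:8888/subscribe/ports")
    while True:
        msg = yield client.read_message()
        if msg is None:  # the server closed the connection
            break
        print(msg)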
I'm trying to implement a websocket/wamp client using autobahn|python
and asyncio, and while it's somewhat working, there are parts that have
eluded me.
What I'm really trying to do is implement WAMP in qt5/QML, but this
seemed like an easier path for the moment.
This simplified client, mostly copied from online examples, does work. It reads the
time service when onJoin occurs.
What I'd like to do is trigger this read from an external source.
The convoluted approach I've taken is to run the asyncio event loop in a
thread, and then to send a command over a socket to trigger the read. I
have so far unable to figure out where to put the routine/coroutine so
that it can be found from the reader routine.
I suspect there's a simpler way to go about this but I haven't found it
yet. Suggestions are welcome.
#!/usr/bin/python3
try:
import asyncio
except ImportError:
## Trollius >= 0.3 was renamed
import trollius as asyncio
from autobahn.asyncio import wamp, websocket
import threading
import time
from socket import socketpair
rsock, wsock = socketpair()
def reader() :
data = rsock.recv(100)
print("Received:", data.decode())
class MyFrontendComponent(wamp.ApplicationSession):
def onConnect(self):
self.join(u"realm1")
@asyncio.coroutine
def onJoin(self, details):
print('joined')
## call a remote procedure
##
try:
now = yield from self.call(u'com.timeservice.now')
except Exception as e:
print("Error: {}".format(e))
else:
print("Current time from time service: {}".format(now))
def onLeave(self, details):
self.disconnect()
def onDisconnect(self):
asyncio.get_event_loop().stop()
def start_aloop() :
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
transport_factory = websocket.WampWebSocketClientFactory(session_factory,
debug = False,
debug_wamp = False)
coro = loop.create_connection(transport_factory, '127.0.0.1', 8080)
loop.add_reader(rsock,reader)
loop.run_until_complete(coro)
loop.run_forever()
loop.close()
if __name__ == '__main__':
session_factory = wamp.ApplicationSessionFactory()
session_factory.session = MyFrontendComponent
## 4) now enter the asyncio event loop
print('starting thread')
thread = threading.Thread(target=start_aloop)
thread.start()
time.sleep(5)
print("IN MAIN")
# emulate an outside call
wsock.send(b'a byte string')
You can listen on a socket asynchronously inside the event loop, using loop.sock_accept. You can just call a coroutine to set up the socket inside of onConnect or onJoin:
try:
import asyncio
except ImportError:
## Trollius >= 0.3 was renamed
import trollius as asyncio
from autobahn.asyncio import wamp, websocket
import socket
class MyFrontendComponent(wamp.ApplicationSession):
def onConnect(self):
self.join(u"realm1")
@asyncio.coroutine
def setup_socket(self):
# Create a non-blocking socket
self.sock = socket.socket()
self.sock.setblocking(0)
self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
self.sock.bind(('localhost', 8889))
self.sock.listen(5)
loop = asyncio.get_event_loop()
# Wait for connections to come in. When one arrives,
# call the time service and disconnect immediately.
while True:
conn, address = yield from loop.sock_accept(self.sock)
yield from self.call_timeservice()
conn.close()
@asyncio.coroutine
def onJoin(self, details):
print('joined')
# Setup our socket server
asyncio.async(self.setup_socket())
## call a remote procedure
##
yield from self.call_timeservice()
@asyncio.coroutine
def call_timeservice(self):
try:
now = yield from self.call(u'com.timeservice.now')
except Exception as e:
print("Error: {}".format(e))
else:
print("Current time from time service: {}".format(now))
... # The rest is the same
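To exercise this once the session has joined, anything that connects to the listening socket will trigger one call; for example (assuming the localhost:8889 address used above):
import socket

# Each accepted connection triggers one com.timeservice.now call on the session.
s = socket.create_connection(('localhost', 8889))
s.close()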
Thanks for the response, dano. Not quite the solution I needed, but it pointed me in the right direction. Yes, I want the client to make remote RPC calls from an external trigger.
I came up with the following, which allows me to pass a string for the specific call (though only one is implemented right now).
I'm not sure how elegant it is.
import asyncio
from autobahn.asyncio import wamp, websocket
import threading
import time
import socket
rsock, wsock = socket.socketpair()
class MyFrontendComponent(wamp.ApplicationSession):
def onConnect(self):
self.join(u"realm1")
@asyncio.coroutine
def setup_socket(self):
# Create a non-blocking socket
self.sock = rsock
self.sock.setblocking(0)
loop = asyncio.get_event_loop()
# Wait for connections to come in. When one arrives,
# call the time service and disconnect immediately.
while True:
rcmd = yield from loop.sock_recv(rsock,80)
yield from self.call_service(rcmd.decode())
@asyncio.coroutine
def onJoin(self, details):
# Setup our socket server
asyncio.async(self.setup_socket())
@asyncio.coroutine
def call_service(self,rcmd):
print(rcmd)
try:
now = yield from self.call(rcmd)
except Exception as e:
print("Error: {}".format(e))
else:
print("Current time from time service: {}".format(now))
def onLeave(self, details):
self.disconnect()
def onDisconnect(self):
asyncio.get_event_loop().stop()
def start_aloop() :
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
transport_factory = websocket.WampWebSocketClientFactory(session_factory,
debug = False,
debug_wamp = False)
coro = loop.create_connection(transport_factory, '127.0.0.1', 8080)
loop.run_until_complete(coro)
loop.run_forever()
loop.close()
if __name__ == '__main__':
session_factory = wamp.ApplicationSessionFactory()
session_factory.session = MyFrontendComponent
## 4) now enter the asyncio event loop
print('starting thread')
thread = threading.Thread(target=start_aloop)
thread.start()
time.sleep(5)
wsock.send(b'com.timeservice.now')
time.sleep(5)
wsock.send(b'com.timeservice.now')
time.sleep(5)
wsock.send(b'com.timeservice.now')