Make a Twisted server take initiative - Python

I have a server in Twisted, implementing a LineReceiver protocol.
When I call sendLine in response to a client message, it writes the line to the client immediately, as one would expect.
But say the client asks the server to do a lengthy calculation. I want the server to periodically send a progress message to the client. When the server takes the initiative and calls sendLine without the client having asked for anything, it seems to wait for the client to send a message to the server before sending anything.
How do I send a message from the server to the client immediately, without having the client explicitly ask for it?

Use Deferreds if you perform the calculation asynchronously.
Alternatively, if it's some long calculation running in a separate thread, started by, let's say, deferToThread(), use reactor.callFromThread().
(I assume we don't do the heavy calculation in the main loop - that's very, very wrong :))
A little example:
from twisted.internet import reactor, threads

def some_long_foo(data_array, protocol):
    def send_msg(msg, protocol):
        # It actually looks better in classes, without pushing protocol
        # here and there.
        protocol.transport.write(msg)
    for n, chunk in enumerate(data_array):
        do_something_cool(chunk)
        if n and (n % 10 == 0):
            # Here send_msg will be safely executed in the main reactor loop.
            reactor.callFromThread(send_msg, '10 more chunks processed',
                                   protocol)

# Somewhere in lineReceived we start the long calculation:
def cb(result):
    self.transport.write('got result: {}'.format(result))

d = threads.deferToThread(some_long_foo, data_array, self)
d.addCallback(cb)
Thus we'll notify the client about progress every 10 chunks of data, and finally send the result.
The code may be slightly incorrect; it's just an example.
docs
UPD:
Just for clarification: I missed the sendLine part. Generally it doesn't matter; call it instead of transport.write().

You can send a line using sendLine whenever you want and it should arrive immediately, but you may have a problem related to your server blocking.
A call to sendLine is buffered rather than written out synchronously, so if you make the call in the middle of a bunch of processing, the data may not actually go out for a while; then, when a message is received, the reactor gets control, receives the message, and flushes the queued output before going back to your processing. You should read some of the other answers here and make sure that your processing isn't blocking up your main thread.
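For instance, here is a minimal sketch (not from the original answer; handle_chunk() is a hypothetical per-chunk work function) of splitting a long computation into chunks so the reactor stays responsive and buffered output gets flushed between chunks:

from twisted.internet import reactor

def process_chunks(protocol, chunks, index=0):
    if index >= len(chunks):
        protocol.sendLine(b'done')
        return
    handle_chunk(chunks[index])  # hypothetical per-chunk work
    protocol.sendLine(b'progress: chunk %d' % index)
    # Return control to the reactor; resume with the next chunk shortly.
    reactor.callLater(0, process_chunks, protocol, chunks, index + 1)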

If by "immediately" you mean "when the client connects", try calling sendLine in your LineReceiver subclass's connectionMade.

Related

Python Sockets: Question regarding Network Buffers when using send() and recv()

First off, I've read around a fair amount, including many threads on this site; however, I still need some clarification on sockets, TCP, and networking in Python, as I feel like I don't fully understand what's happening in my program.
I'm sending data from a server to a client via a Unix domain socket (AF_UNIX) using a stream connection (SOCK_STREAM).
On the server side, a process is continuously putting items on a Queue.Queue, and another process is sending items to the client by running

while True:
    conn.sendall(queue.get())

On the client side, data is read by running

while True:
    conn.recv(1024)
    # time.sleep(10)
Now, I emulate a slow client by sending the client process to sleep after every call to recv(). What I expect is that the queue on the server side fills up, since send() should block because the client can't read the data off fast enough.
I monitor the number of items sent to the client as well as the queue size. What I notice is that several dozen messages (roughly depending on the size of the messages, though slightly different message sizes might behave the same) are sent to the client (which the client receives with a delay, due to time.sleep()) before the queue starts to fill up.
What is happening here? Why is send() not blocking immediately?
I suspect that some sort of network or file buffer is involved, which queues the send items and fills up before my implemented queue.
There are a number of buffers in various places in the system, on both the sender and the receiver. Your call to a sending function won't block until all those buffers are filled up. When the receiver drains some of the buffers, data will flow again and eventually it will unblock the send call.
Typically there's a buffer in the sender holding data waiting to be put on the wire, a buffer "in flight" allowing a certain number of bytes to be sent before having to wait for the receiver to acknowledge, and lastly receive buffers holding data that has been acknowledged but not yet delivered to the receiving application.
Were this not so, forward progress would be extremely limited. The sender would be stuck waiting to send until the receiver called receive. Then, whichever one finishes first would have to wait for the other one. Even if the sender was finished first, it couldn't make any forward progress at all until the receiver finished processing the previous chunk of data. That would be quite sub-optimal for most applications.
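If you want to see send() block sooner, one rough experiment (a sketch only; the sizes are hints that the OS may round or clamp) is to shrink those kernel buffers before connecting:

import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)  # sender-side buffer
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)  # receiver-side buffer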

Python websockets get stuck

I have a python server that is available through a websocket endpoint.
During serving a connection, it also communicates with some backend services. This communication is asynchronous and may trigger the send() method of the websocket.
When a single client is served, it seems to work fine. However, when multiple clients are served in parallel, some of the routines that handle the connections occasionally get stuck. More precisely, it seems to block in the recv() method.
The actual code is somewhat complex and the issue is slightly more complicated than I have described; nevertheless, I provide a minimal skeleton of code that sketches the way in which I use the websockets:
import asyncio
import logging
import random
import time

logger = logging.getLogger(__name__)

class MinimalConversation(object):
    def __init__(self, ws, worker_sck, messages, should_continue_conversation, should_continue_listen):
        self.ws = ws
        self.messages = messages
        self.worker_sck = worker_sck
        self.should_continue_conversation = should_continue_conversation
        self.should_continue_listen = should_continue_listen

    async def run_conversation(self):
        serving_future = asyncio.ensure_future(self.serve_connection())
        listening_future = asyncio.ensure_future(self.handle_worker())
        await asyncio.wait([serving_future, listening_future], return_when=asyncio.ALL_COMPLETED)

    async def serve_connection(self):
        while self.should_continue_conversation():
            await self.ws.recv()
            logger.debug("Message received")
            self.sleep_randomly(10, 5)
            await self.worker_sck.send(b"Dummy")

    async def handle_worker(self):
        while self.should_continue_listen():
            self.sleep_randomly(50, 40)
            await self.worker_sck.recv()
            await self.ws.send(self.messages.pop())

    def sleep_randomly(self, mean, dev):
        delta = random.randint(1, dev) / 1000
        if random.random() < .5:
            delta *= -1
        time.sleep(mean / 1000 + delta)
Obviously, in the real code I do not sleep for random intervals and don't use a given list of messages, but this sketches the way I handle the websockets. In the real setting, some errors may occur that are sent over the websocket too, so parallel send() calls may occur in theory, but I have never encountered such a situation.
The code is run from a handler function which is passed as a parameter to websockets.serve(); it initializes the MinimalConversation object and calls the run_conversation() method.
My questions are:
Is there something fundamentally wrong with such usage of the websockets?
Are concurrent calls of the send() methods dangerous?
Can you suggest some good practices regarding usage of websockets and asyncio?
Thank you.
The recv function yields back only when a message is received, and it seems that there are two connections awaiting messages from each other, so there might be a situation similar to a "deadlock", where they are waiting for each other's messages and can't send anything. Maybe you should rethink the overall algorithm to be safer from this.
And, of course, try adding more debug output and see what really happens.
are concurrent calls of the send() methods dangerous?
If by concurrent you mean in the same thread but in independently scheduled coroutines, then parallel send is just fine. But be careful with "parallel" recv on the same connection, because the order of coroutine scheduling might be far from obvious, and it's what decides which call to recv will get a message first.
Can you suggest some good practices regarding usage of websockets and asyncio?
In my experience, the easiest way is to create a dedicated task for each incoming connection which repeatedly calls recv on the connection until the connection is closed. You can store the connection somewhere and delete it in a finally block; then it can be used from other coroutines to send something.
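A minimal sketch of that pattern, assuming the websockets library's serve() API (older versions pass a path argument to the handler) and a hypothetical dispatch() coroutine that decides what to do with each message:

import asyncio
import websockets

connections = set()

async def handler(ws, path):
    # Dedicated reader task: the only place recv() happens on this socket.
    connections.add(ws)
    try:
        async for message in ws:
            await dispatch(message, ws)
    finally:
        connections.discard(ws)  # drop the connection once it closes

async def dispatch(message, ws):
    # Hypothetical routing; other coroutines may also call ws.send().
    await ws.send('echo: ' + message)

start_server = websockets.serve(handler, 'localhost', 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()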

How to listen for incoming websocket messages from multiple places using Tornado?

Let's say I have an infinite while loop that awaits the read_message() method of a WebSocket Tornado client connection. Then, I externally trigger a function that sends a message and should get an immediate response.
Since everything is asynchronous, I would assume that when that external call takes place, the execution goes over to it. But when I try to listen for a response inside that call, it throws an AssertionError stating that self.read_future is not None, when it should be.
Here are the methods of the client application. A little earlier, it connects to a server and places the connection in the self.conn variable:
async def loop(self):
    while True:
        print(await self.conn.read_message())

async def ext_call(self):
    self.conn.write_message('Hello, World!')
    response = await self.conn.read_message()  # This line fails
Why can't I listen for messages in two separate places?
What you're asking for is ambiguous - which messages would go to which place? It looks like you probably mean for the next message to arrive after you send "Hello, World!" to be handled in ext_call, and all others to be printed in loop. But how could the system know that? Consider that the "Hello, World!" message could be sent to the server and the response received before the Python interpreter has executed the read_message call in ext_call, so the response would be routed to the waiting read_message in loop.
In general, you want to use one of these patterns but not both at the same time. If you always have matched request/response pairs of messages, you can read after you send as in ext_call. But if you may have messages that are not a part of request/response pairs, you need one loop that reads all the messages and decides what to do with them (perhaps splitting them up by type and emitting them on one or more queues using the tornado.queues module).
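For example, a rough sketch of the single-reader approach, where a hypothetical 'response:' prefix stands in for whatever framing lets you recognize replies, and a tornado.queues.Queue hands them to ext_call:

from tornado.queues import Queue

class Client(object):
    def __init__(self, conn):
        self.conn = conn
        self.responses = Queue()  # replies routed here by the read loop

    async def loop(self):
        # The single reader: every incoming message is dispatched from here.
        while True:
            msg = await self.conn.read_message()
            if msg is None:
                break  # connection closed
            if msg.startswith('response:'):  # assumed framing convention
                await self.responses.put(msg)
            else:
                print(msg)

    async def ext_call(self):
        await self.conn.write_message('Hello, World!')
        return await self.responses.get()  # reply arrives via loop()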

Pattern for a background Twisted server that fills an incoming message queue and empties an outgoing message queue?

I'd like to do something like this:
twistedServer.start()  # This would be a nonblocking call
while True:
    while twistedServer.haveMessage():
        message = twistedServer.getMessage()
        response = handleMessage(message)
        twistedServer.sendResponse(response)
    doSomeOtherLogic()
The key thing I want to do is run the server in a background thread. I'm hoping to do this with a thread instead of through multiprocessing/queue because I already have one layer of messaging for my app and I'd like to avoid two. I'm bringing this up because I can already see how to do this in a separate process, but what I'd like to know is how to do it in a thread, or if I can. Or if perhaps there is some other pattern I can use that accomplishes this same thing, like perhaps writing my own reactor.run method. Thanks for any help.
:)
The key thing I want to do is run the server in a background thread.
You don't explain why this is key, though. Generally, things like "use threads" are implementation details. Perhaps threads are appropriate, perhaps not, but the actual goal is agnostic on the point. What is your goal? To handle multiple clients concurrently? To handle messages of this sort simultaneously with events from another source (for example, a web server)? Without knowing the ultimate goal, there's no way to know if an implementation strategy I suggest will work or not.
With that in mind, here are two possibilities.
First, you could forget about threads. This would entail defining your event handling logic above as only the event handling parts. The part that tries to get an event would be delegated to another part of the application, probably something ultimately based on one of the reactor APIs (for example, you might set up a TCP server which accepts messages and turns them into the events you're processing, in which case you would start off with a call to reactor.listenTCP of some sort).
So your example might turn into something like this (with some added specificity to try to increase the instructive value):
from twisted.internet import reactor

class MessageReverser(object):
    """
    Accept messages, reverse them, and send them onwards.
    """
    def __init__(self, server):
        self.server = server

    def messageReceived(self, message):
        """
        Callback invoked whenever a message is received. This implementation
        will reverse and re-send the message.
        """
        self.server.sendMessage(message[::-1])
        doSomeOtherLogic()

def main():
    twistedServer = ...
    twistedServer.start(MessageReverser(twistedServer))
    reactor.run()

main()
Several points to note about this example:
I'm not sure how your twistedServer is defined. I'm imagining that it interfaces with the network in some way. Your version of the code would have had it receiving messages and buffering them until they were removed from the buffer by your loop for processing. This version would probably have no buffer, but instead just call the messageReceived method of the object passed to start as soon as a message arrives. You could still add buffering of some sort if you want, by putting it into the messageReceived method.
There is now a call to reactor.run which will block. You might instead write this code as a twistd plugin or a .tac file, in which case you wouldn't be directly responsible for starting the reactor. However, someone must start the reactor, or most APIs from Twisted won't do anything. reactor.run blocks, of course, until someone calls reactor.stop.
There are no threads used by this approach. Twisted's cooperative multitasking approach to concurrency means you can still do multiple things at once, as long as you're mindful to cooperate (which usually means returning to the reactor once in a while).
The exact times at which the doSomeOtherLogic function is called change slightly, because there's no notion of "the buffer is empty for now" separate from "I just handled a message". You could change this so that the function is instead called once a second, or after every N messages, or whatever is appropriate.
The second possibility would be to really use threads. This might look very similar to the previous example, but you would call reactor.run in another thread, rather than the main thread. For example,
from Queue import Queue
from threading import Thread

from twisted.internet import reactor

class MessageQueuer(object):
    def __init__(self, queue):
        self.queue = queue

    def messageReceived(self, message):
        self.queue.put(message)

def main():
    queue = Queue()
    twistedServer = ...
    twistedServer.start(MessageQueuer(queue))
    Thread(target=reactor.run, args=(False,)).start()
    while True:
        message = queue.get()
        response = handleMessage(message)
        reactor.callFromThread(twistedServer.sendResponse, response)

main()
This version assumes a twistedServer which works similarly, but uses a thread to let you have the while True: loop. Note:
You must invoke reactor.run(False) if you use a thread, to prevent Twisted from trying to install any signal handlers, which Python only allows to be installed in the main thread. This means the Ctrl-C handling will be disabled and reactor.spawnProcess won't work reliably.
MessageQueuer has the same interface as MessageReverser, only its implementation of messageReceived is different. It uses the threadsafe Queue object to communicate between the reactor thread (in which it will be called) and your main thread where the while True: loop is running.
You must use reactor.callFromThread to send the message back to the reactor thread (assuming twistedServer.sendResponse is actually based on Twisted APIs). Twisted APIs are typically not threadsafe and must be called in the reactor thread. This is what reactor.callFromThread does for you.
You'll want to implement some way to stop the loop and the reactor, one supposes. The python process won't exit cleanly until after you call reactor.stop.
Note that while the threaded version gives you the familiar, desired while True loop, it doesn't actually do anything much better than the non-threaded version. It's just more complicated. So, consider whether you actually need threads, or if they're merely an implementation technique that can be exchanged for something else.

How should a ZeroMQ worker safely "hang up"?

I started using ZeroMQ this week, and when using the Request-Response pattern I am not sure how to have a worker safely "hang up" and close his socket without possibly dropping a message and causing the customer who sent that message to never get a response. Imagine a worker written in Python who looks something like this:
import zmq

c = zmq.Context()
s = c.socket(zmq.REP)
s.connect('tcp://127.0.0.1:9999')
for i in range(8):
    s.recv()
    s.send('reply')
s.close()
I have been doing experiments and have found that a customer at 127.0.0.1:9999 of socket type zmq.REQ who makes a fair-queued request just might have the misfortune of having the fair-queuing algorithm choose the above worker right after the worker has done its last send() but before it runs the following close() method. In that case, it seems that the request is received and buffered by the ØMQ stack in the worker process, and that the request is then lost when close() throws out everything associated with the socket.
How can a worker detach "safely" — is there any way to signal "I don't want messages anymore", then (a) loop over any final messages that have arrived during transmission of the signal, (b) generate their replies, and then (c) execute close() with the guarantee that no messages are being thrown away?
Edit: I suppose the raw state that I would want to enter is a "half-closed" state, where no further requests could be received — and the sender would know that — but where the return path is still open so that I can check my incoming buffer for one last arrived message and respond to it if there is one sitting in the buffer.
Edit: In response to a good question, corrected the description to make the number of waiting messages plural, as there could be many connections waiting on replies.
You seem to think that you are trying to avoid a “simple” race condition such as in
... = zmq_recv(fd);
do_something();
zmq_send(fd, answer);
/* Let's hope a new request does not arrive just now, please close it quickly! */
zmq_close(fd);
but I think the problem is that fair queuing (round-robin) makes things even more difficult: you might already even have several queued requests on your worker. The sender will not wait for your worker to be free before sending a new request if it is its turn to receive one, so at the time you call zmq_send other requests might be waiting already.
In fact, it looks like you might have selected the wrong data direction. Instead of having a requests pool send requests to your workers (even when you would prefer not to receive new ones), you might want to have your workers fetch a new request from a requests queue, take care of it, then send the answer.
Of course, it means using XREP/XREQ, but I think it is worth it.
Edit: I wrote some code implementing the other direction to explain what I mean.
I think the problem is that your messaging architecture is wrong. Your workers should use a REQ socket to send a request for work and that way there is only ever one job queued at the worker. Then to acknowledge completion of the work, you could either use another REQ request that doubles as ack for the previous job and request for a new one, or you could have a second control socket.
Some people do this using PUB/SUB for the control so that each worker publishes acks and the master subscribes to them.
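A minimal sketch of the first variant (untested; the broker address and do_work() are placeholders), where each reply doubles as the ack for the previous job and the request for the next:

import zmq

ctx = zmq.Context()
s = ctx.socket(zmq.REQ)
s.connect('tcp://127.0.0.1:9999')  # placeholder broker/queue address

reply = b'ready'                   # the first request carries no result yet
for _ in range(8):
    s.send(reply)                  # ack previous job / ask for the next one
    job = s.recv()                 # at most one job is ever queued here
    reply = do_work(job)           # placeholder work function
s.send(reply)                      # deliver the final result; a fuller design
                                   # would flag this as the worker's last message
s.close()
ctx.term()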
You have to remember that with ZeroMQ there are 0 message queues. None at all! Just messages buffered in either the sender or receiver depending on settings like High Water Mark, and type of socket. If you really do need message queues then you need to write a broker app to handle that, or simply switch to AMQP where all communication is through a 3rd party broker.
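For illustration (a sketch assuming a reasonably recent pyzmq), the high-water mark is the knob that caps that per-connection buffering:

import zmq

ctx = zmq.Context()
s = ctx.socket(zmq.PUSH)
s.set_hwm(100)  # cap buffered messages; past this, ZeroMQ blocks or drops
                # (which of the two depends on the socket type)
s.bind('tcp://127.0.0.1:9998')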
I've been thinking about this as well. You may want to implement a CLOSE message which notifies the customer that the worker is going away. You could then have the worker drain for a period of time before shutting down. Not ideal, of course, but might be workable.
There is a conflict of interest between sending requests as rapidly as possible to workers, and getting reliability in case a worker crashes or dies. There is an entire section of the ZeroMQ Guide that explains different answers to this question of reliability. Read that; it'll help a lot.
tl;dr workers can/will crash and clients need a resend functionality. The Guide provides reusable code for that, in many languages.
Wouldn't the simplest solution be to have the customer timeout when waiting for the reply and then retry if no reply is received?
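That's essentially what the Guide calls the "Lazy Pirate" pattern; a rough sketch (the endpoint and payload are placeholders):

import zmq

ENDPOINT = 'tcp://127.0.0.1:9999'  # placeholder
TIMEOUT_MS = 2500
RETRIES = 3

ctx = zmq.Context()
for attempt in range(RETRIES):
    s = ctx.socket(zmq.REQ)
    s.connect(ENDPOINT)
    s.send(b'request')
    if s.poll(TIMEOUT_MS):      # wait up to TIMEOUT_MS for the reply
        print(s.recv())
        s.close()
        break
    # No reply in time: a REQ socket can't just resend, so discard it
    # and retry with a fresh one.
    s.setsockopt(zmq.LINGER, 0)
    s.close()
ctx.term()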
Try sleeping before the call to close. This is fixed in 2.1 but not in 2.0 yet.