FIFO (named pipe) messaging obstacles - python

I plan to use Unix named pipes (mkfifo) for simple multi-process messaging.
A message would be just a single line of text.
Would you discourage me from that? What obstacles should I expect?
I have noticed these limitations:
A sender cannot continue until the message is received.
A receiver is blocked until some data is available. Non-blocking I/O would be needed when we need to stop the reading, for example when another thread asks for that.
The receiver could obtain many messages in a single read. These have to be processed
before quitting.
The maximum length of an atomic message is limited to 4096 bytes; that is the PIPE_BUF limit on Linux (see man 7 pipe).
I will implement the messaging in Python. But the obstacles hold in general.

Lack of portability - they are mainly a Unix thing. Sockets are more portable.
Harder to scale out to multiple systems (another plus for sockets).
On the other hand, I believe pipes are faster than sockets for processes on the same machine (less communication overhead).
As to your limitations,
You can "select" on pipes, to do a non-blocking read.
I normally (in Perl) print out my messages on pipes separated by "\n", and read a line from them to get one message at a time.
Do be careful with the atomic length.
I find perlipc to be a good discussion of the various options, though it has Perl-specific code.
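In Python, a minimal sketch of that select-on-a-pipe idea might look like this. The FIFO path is just an example, and the FIFO is assumed to have been created beforehand with os.mkfifo():

import os
import select

FIFO_PATH = "/tmp/messages.fifo"   # example path; created earlier with os.mkfifo()

# O_NONBLOCK so the open itself does not wait for a writer to appear
fd = os.open(FIFO_PATH, os.O_RDONLY | os.O_NONBLOCK)

buffer = b""
while True:
    # Wait up to 1 second for data; an empty rlist means the timeout expired,
    # which is a natural place to check a "please stop reading" flag
    rlist, _, _ = select.select([fd], [], [], 1.0)
    if not rlist:
        continue
    chunk = os.read(fd, 4096)
    if not chunk:
        break            # all writers closed the FIFO (EOF); reopen or exit
    buffer += chunk
    # a single read may contain several "\n"-separated messages
    *messages, buffer = buffer.split(b"\n")
    for message in messages:
        print(message.decode())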

The blocking, both on the sender side and the receiver side, can be worked around via non-blocking I/O.
Further limitations of FIFOs:
Only one client at a time.
After the client closes the FIFO, the server needs to re-open its endpoint.
Unidirectional.
I would use UNIX domain sockets instead, which have none of the above limitations.
As an added benefit, if you want to scale it to communicate between multiple machines, it's barely any change at all. For example, just take the Python documentation page on socket, replace socket.AF_INET with socket.AF_UNIX and (HOST, PORT) with a filename, and it just works.
SOCK_STREAM will give you stream-like behavior; that is, two sends may be merged into one receive or vice versa. AF_UNIX also supports SOCK_DGRAM: datagrams are guaranteed to be sent and read all as one unit or not at all. (Analogously, AF_INET+SOCK_STREAM=TCP, AF_INET+SOCK_DGRAM=UDP.)
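For illustration, a minimal sketch of that substitution (the socket path is just an example), with the server and client parts meant to run in separate processes:

import os
import socket

SOCK_PATH = "/tmp/demo.sock"      # example filesystem path used instead of (HOST, PORT)

# --- server process ---
if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)          # remove a stale socket file from a previous run
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen(1)
conn, _ = server.accept()
print(conn.recv(1024))            # receive one message
conn.close()
server.close()

# --- client process ---
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(SOCK_PATH)
client.sendall(b"hello over a unix domain socket\n")
client.close()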

How to run duplex communication using python sockets

I have 3 Raspberry Pi's, all on the same LAN doing stuff that is monitored by Python and I want them to talk to each other, and to my PC. Sockets seem like the way to go, but the examples are so simplistic. Here's the issue I am stuck on - the listen and receive processes are all blocking, unless you set a timeout, in which case they still block, just for less time.
So, if I set up a round-robin, then each Pi will only be listened to (or received on) for 1/3 of the time, or less if there is stuff to transmit as well.
What I'd like to understand better is what happens to the data (or connection requests) when I am not listening/receiving - are these buffered by the OS, or lost..? What happens to the socket when there is no method called, is it happy to be ignored for a while, or will the socket itself be dumped by the OS..?
I am starting to split these into separate processes now, which is getting messy and seems inefficient, but I can't think of another way except to run this as 3 (currently), maybe 6 (transmit/receive) or even 9 (listen/transmit/receive) separate processes..?
Sorry I don't have a code example, but it is already way too big, and it doesn't work. Plus, a lot of the issue seems to me to be in the murky part of the sockets - the part between the socket and the OS. I feel I need to understand this better to get to the right architecture for my bit of code before I really start debugging the various exceptions and communication failures...
You can handle multiple sockets in a single process using I/O multiplexing. This is usually done using calls such as epoll(), poll() or select(). These calls monitor multiple sockets and return when one or more of them have data available for reading or are ready to be written to. In many cases this is more convenient than using multiple processes and/or threads.
These calls are pretty low-level OS calls. Python seems to have higher-level functionality that might be easier to use, but I haven't tried it myself.
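That higher-level functionality is presumably the standard-library selectors module (Python 3.4+), which wraps select/poll/epoll behind one interface. A rough sketch, assuming `sockets` is a list of already-connected sockets (e.g. one per Pi):

import selectors

sel = selectors.DefaultSelector()     # picks epoll/kqueue/poll/select for the platform

for sock in sockets:                  # `sockets` is assumed to already exist
    sock.setblocking(False)
    sel.register(sock, selectors.EVENT_READ)

while True:
    # Blocks until at least one registered socket is readable, or 1 s passes
    for key, events in sel.select(timeout=1.0):
        data = key.fileobj.recv(4096)
        if data:
            print("received", data, "from", key.fileobj.getpeername())
        else:                          # empty read: the peer closed the connection
            sel.unregister(key.fileobj)
            key.fileobj.close()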

ZeroMQ: How to construct simple asynchronous broker? Seems impossible

I am building a simple star-like client-server topology.
The idea is that clients connect to the server, can send messages, and the server can send messages to them, when the server decides to. There will be a relatively small number of clients, about 30, but enough that it is not sensible to send all outgoing data to all of them. I'm sure I'm just boneheaded, but this seems to be completely impossible with ZeroMQ.
The last part is the reason this question does not provide an answer.
The catch is this :
I can use a ROUTER socket to receive messages from clients. This also carries identification. However, I cannot use the same socket for sending, since ZeroMQ sockets are not thread-safe; i.e. I can't have one thread waiting for incoming messages and another sending outgoing messages from the server itself. I am not aware of any way to block waiting for both - socket.recv() and, for example, .get() on a queue - at the same time on a single thread in Python. Maybe there is a way to do that.
Using two sockets - one incoming, one outgoing - doesn't work either. The identification is not shared between sockets, so the sending socket would still have to be polled to obtain the client id mapping, even if only once. We obviously can't use a separate port for each client. There seems to be no way for the server to send a message to a single client of its own volition.
(Subscription topics are a dead idea too: message filtering is performed on the client side, and the server would just flood all the client networks.)
In the end, plain TCP sockets can handle this sort of asynchronous situation easily, but effective message framing in Python is a nightmare to build. All I'm essentially after is a reliable socket that handles messages and has well-defined failure modes.
I don't know Python but for C/C++ I would use zmq_poll(). There are several options, depending on your requirements.
Use zmq_poll() to wait for messages from clients. If a message arrives, process it. Also use a time-out. When the time-out expires, check if you need to send messages to clients and send them.
zmq_poll() can also wait on general file descriptors. You can use some type of file descriptor and trigger it (write to it) from another process or thread when you have a message to send to a client. If this file descriptor is triggered, send messages to clients.
Use ZeroMQ sockets internally inside your server. Use zmq_poll() to wait both on messages from clients and internal processes or threads. If the internal sockets are triggered, send messages to clients.
You can use the file descriptor or internal ZeroMQ sockets just for triggering but you can also send the message content through the file descriptor or ZeroMQ socket.
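In Python (pyzmq), the third option could look roughly like the sketch below. The endpoints, the inproc name, and the convention that each client is a DEALER that sends its own name as the first frame are all assumptions made for illustration:

import zmq

ctx = zmq.Context()

clients = ctx.socket(zmq.ROUTER)        # talks to the ~30 clients
clients.bind("tcp://*:5555")            # example port

internal = ctx.socket(zmq.PULL)         # other threads PUSH outgoing work here
internal.bind("inproc://outgoing")      # example inproc endpoint

poller = zmq.Poller()
poller.register(clients, zmq.POLLIN)
poller.register(internal, zmq.POLLIN)

identities = {}                         # client name -> ROUTER identity frame

while True:
    events = dict(poller.poll(timeout=1000))        # milliseconds

    if clients in events:
        identity, name, payload = clients.recv_multipart()
        identities[name] = identity                 # remember how to reach this client
        # ... handle the client's message here ...

    if internal in events:
        # a sender thread (with its own PUSH socket connected to
        # "inproc://outgoing") asked us to deliver `payload` to `name`
        name, payload = internal.recv_multipart()
        if name in identities:
            clients.send_multipart([identities[name], payload])

Each sending thread keeps its own PUSH socket connected to the inproc endpoint, so only the main loop ever touches the ROUTER socket, which sidesteps the thread-safety concern.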
Q : "ZeroMQ: How to construct simple asynchronous broker?"
The concept builds on a few assumptions that are not supported or do not hold :
a) Python threads never actually execute concurrently; the GIL re-serialises them into a sequence of execution blocks, and for any foreseeable future this will remain so, as Guido van Rossum has explained that this is a deliberate design choice for collision prevention (details on the GIL lock serving this purpose are countless).
b) ZeroMQ thread-safety has nothing to do with using a blocking mode for operations.
c) The ZeroMQ PUB/SUB archetype does perform topic filtering, yet different versions do it on different sides of the "ocean":
Until v3.1, the subscription mechanics (a.k.a. the TOPIC filter) were handled on the SUB side, so this part of the processing was distributed among all the SUBs (at the cost of uniformly wide data traffic across all the transport classes involved), and there was no penalty on the PUB side except for sourcing that data flow in the first place.
Since v3.1, the TOPIC filter is processed on the PUB side, at the cost of that processing overhead and its memory allocations, but saving all the previously wasted transport capacity, which was consumed only for the SUB side to later realise that a message did not match the TOPIC filter and had to be disposed of.
Using a .poll()-based design with the zmq.NOBLOCK mode of the .recv() and .send() methods will never leave one in an ambiguous, let alone an unsalvageable, deadlocked waiting state, and it adds the capability to design even a lightweight priority-driven soft-scheduler operating with different relative priority levels.
Given your strong exposure to realtime systems, you might like to have a read into this to review the ZeroMQ framework properties.
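A tiny sketch of the zmq.NOBLOCK pattern mentioned above (sock is assumed to be an existing pyzmq socket):

import zmq

try:
    msg = sock.recv(flags=zmq.NOBLOCK)   # returns immediately instead of blocking
except zmq.Again:                        # nothing is waiting right now
    msg = None                           # do other work, then try again later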

ZeroMQ: socket per data type or just one socket?

I've got a program which receives information from about 10 other (sensor reading) programs (all controlled by myself). I now want to make them communicate using ZeroMQ.
For most of the queues the important thing is that the central receiving program always has the latest sensor data, all older messages are not important anymore. If a couple messages get lost I don't care. So for all of them I started out with a separate PUB/SUB socket; one for each program. But I'm not sure if that is the right way to do it. As far as I understand I have two options:
Make a separate socket for every program and read them out in a loop. That way I know by the socket what the information is I'm receiving (I'm often just sending an int).
Make one socket to which all the programs connect, and with every message I send a string which tells the receiving end what the message is about.
All connections are on a PUB/SUB basis, so creating one socket would work out well. I'm just not sure if that is the most efficient way to do it.
All tips are welcome!
PUB/SUB is fine and allows an easy conversion from N-sensors:1-logger into N-sensors:2+-loggers. One might also benefit from a conceptual separation of a socket from an access port, where more than one socket may get connected.
How to always get JUST THE ACTUAL (LAST) SENSOR READOUT:
If not bound, due to system-integration constraints, to some early ZeroMQ API, there is a lovely feature exactly for this via a .setsockopt( ZMQ_CONFLATE, True ) method:
ZMQ_CONFLATE: Keep only last message
If set, a socket shall keep only one message in its inbound/outbound queue, this message being the last message received/the last message to be sent. Ignores ZMQ_RCVHWM and ZMQ_SNDHWM options. Does not support multi-part messages, in particular, only one part of it is kept in the socket internal queue.
On design dilemma:
Unless your real-time control stability imposes some hard-real-time limit, the PUB side freely decides how often a new value is .send()-ed to the SUB(s). No magic is needed here, even less so with the ZMQ_CONFLATE option set on the managed internal outgoing queue.
The SUB-side receiver(s) will also benefit from the ZMQ_CONFLATE option set on the managed internal incoming queue, but given that a set of individual .bind()-s instantiates separate landing ports for the delivery of the different individual sensor readouts, your "last" values will consistently remain the last readouts. If all readouts went into a common landing pad, your receiving process would lose (have masked out) all readouts except the one that just happened to be the "last" right before .recv() took place, which would not help much, would it?
If some I/O-performance-related tweaking becomes necessary, the .Context( n_IO_threads ) + ZMQ_AFFINITY mapping options may increase and prioritise the resources the ioDataPump may harness for better I/O performance.
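A minimal pyzmq sketch of the ZMQ_CONFLATE idea on the receiving side (the endpoint is a placeholder; note the option has to be set before connect and does not combine with multi-part messages):

import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.CONFLATE, 1)          # keep only the newest message in the queue
sub.setsockopt(zmq.SUBSCRIBE, b"")       # subscribe to everything in this sketch
sub.connect("tcp://sensor-host:5556")    # placeholder endpoint, one per sensor

while True:
    reading = sub.recv()                 # whatever arrives is the latest readout
    # ... use `reading` ...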
Unless you're up against a tight real time requirement there's not much point in having more sockets than necessary. ZMQ's fair queuing ought to take care of giving each sensor program equal attention (see Figure 6 in the guide)
If your sensor programs are on other devices connected by Ethernet, the ultimate performance of your programs is limited by the bandwidth of the Ethernet NIC in your computer. A single thread program handling a single PULL socket stands a good chance of being able to process the data coming in faster than it can transit the NIC.
If that's so, then you may as well stick to a single socket and enjoy the simpler code. It's not very hard dealing with multiple sockets, but it's far easier to deal with one. For example, with one single socket you don't have to tell each sensor program what network port to connect to - it can be a constant.
PUSH/PULL sounds like a more natural pattern for your situation than PUB/SUB, but that won't make much difference.
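A sketch of that single-socket PULL arrangement (the port number and the "name:value" message convention are made up for the example):

import zmq

ctx = zmq.Context()

# Central logger: one PULL socket on one well-known, constant port
pull = ctx.socket(zmq.PULL)
pull.bind("tcp://*:5558")

# Each sensor program (separate process) just does:
#   push = ctx.socket(zmq.PUSH)
#   push.connect("tcp://central-host:5558")
#   push.send_string("temperature:21.5")

while True:
    name, value = pull.recv_string().split(":", 1)   # fair-queued across sensors
    print(name, value)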
Lastness
Lastness is going to be your (potential) problem. The whole point of things like ZMQ is that they will deliver messages in the order they're sent. Thus when you read a message, it is by definition the "last" message so far as the recipient is concerned. The recipient has no idea as to whether or not there is another message on the way, in transit.
This is a feature of Actor model architectures (which is what ZMQ is). Messages get buffered up in the transport, and there's no information about the newness of the message to be learned when it's read. All you know is that it was sent some time beforehand. There is no execution rendezvous with the sender.
Now, you either process it as if it is the last message, or you wait for a period of time to see if another one comes along before processing it. The easiest thing to do is to simply process each message as if it is the last.
Contrast this with a Communicating Sequential Processes architecture. It's basically the same as an Actor model architecture, except that the transport does not buffer messages. Message sends block until the recipient has called message read.
Thus when you read a message, the recipient knows that it is the last one sent by the sender. And the sender knows that the message it has sent has been received at that very instant by the recipient. So the knowledge of lastness is absolute - the message received really is the last one sent.
However, unless you have something fairly heavyweight going on I wouldn't worry about it. You are quite likely to be able to keep up with your sensor data stream even if the messages you're reading aren't the latest in the queue.
You can nearly make ZMQ into CSP by setting the high water mark (HWM) on the sending end's socket to 1. That means that you can buffer up at most 1 message. That's not the same as 0, and unfortunately setting the HWM to 0 means "unlimited size buffer".
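For what it's worth, the HWM part of that looks something like this in pyzmq (set before connecting); keep in mind the receiving side's RCVHWM and the OS TCP buffers still hold messages of their own, which is part of why this is only "nearly" CSP:

import zmq

ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
push.setsockopt(zmq.SNDHWM, 1)             # buffer at most one outgoing message
push.connect("tcp://logger-host:5557")     # placeholder endpoint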

How to manage socket connections for a chat server (Python) via sockets and select module

Sorry to bother everyone with this, but I've been stumped for a while now.
The problem is that I decided to reconfigure this chat program I had using sockets so that instead of a client and a server/client, it would have a server, and then two separate clients.
I asked earlier as to how I might get my server to 'manage' these connections of the clients so that it could redirect the data between them. And I got a fantastic answer that provided me with exactly the code I would apparently need to do this.
The problem is I don't understand how it works, and I did ask in the comments but I didn't get much of a reply except for some links to documentation.
Here's what I was given:
connections = []
while True:
    rlist, wlist, xlist = select.select(connections + [s], [], [])
    for i in rlist:
        if i == s:
            conn, addr = s.accept()
            connections.append(conn)
            continue
        data = i.recv(1024)
        for q in connections:
            if q != i and q != s:
                q.send(data)
As far as I understand, the select module gives the ability to make waitable objects in the case of select.select.
I've got the rlist, the pending to be read list, the wlist, the pending to be written list, and then the xlist, the pending exceptional condition.
He's assigning the pending to be written list to "s" which in my part of the chat server, is the socket that is listening on the assigned port.
That's about as much as I feel I understand clearly enough. But I would really really like some explanation.
If you don't feel like I asked an appropriate question, tell me in the comments and I'll delete it. I don't want to violate any rules, and I'm pretty sure I am not duplicating threads as I did do research for a while before resorting to asking.
Thanks!
Note: my explanation here assumes you're talking about TCP sockets, or at least some type which is connection-based. UDP and other datagram (i.e. non-connection-based) sockets are similar in some ways, but the way you use select on them is slightly different.
Each socket is like an open file which can have data read and written to it. Data that you write goes into a buffer inside the system waiting to be sent out on the network. Data that arrives from the network is buffered inside the system until you read it. Lots of clever stuff is going on underneath, but when you're using a socket that's all you really need to know (at least initially).
It's often useful to remember that the system is doing this buffering in the explanation that follows, because you'll realise that the TCP/IP stack in the OS sends and receives data independently of your application - this is done so your application can have a simple interface (that's what the socket is, a way of hiding all the TCP/IP complexity from your code).
One way of doing this reading and writing is blocking. Using that system, when you call recv(), for example, if there is data waiting in the system then it will be returned immediately. However, if there is no data waiting then the call blocks - that is, your program halts until there is data to read. Sometimes you can do this with a timeout, but in pure blocking IO then you really can wait forever until the other end either sends some data or closes the connection.
This doesn't work too badly for some simple cases, but only where you're talking to one other machine - when you're talking on more than one socket, you can't just wait for data from one machine because the other one may be sending you stuff. There are also other issues which I won't cover in too much detail here - suffice to say it's not a good approach.
One solution is to use different threads for each connection, so the blocking is OK - other threads for other connections can be blocked without affecting each other. In this case you'd need two threads for each connection, one to read and one to write. However, threads can be tricky beasts - you need to carefully synchronise your data between them, which can make coding a little complicated. Also, they're somewhat inefficient for a simple task like this.
The select module allows you a single-threaded solution to this problem - instead of blocking on a single connection, it allows you a function which says "go to sleep until at least one of these sockets has some data I can read on it" (that's a simplification which I'll correct in a moment). So, once that call to select.select() returns, you can be certain that one of the connections you're waiting on has some data, and you can safely read it (even with blocking IO, if you're careful - since you're sure there's data there, you won't ever block waiting for it).
When you first start your application, you have only a single socket which is your listening socket. So, you only pass that in the call to select.select(). The simplification I made earlier is that actually the call accepts three lists of sockets for reading, writing and errors. The sockets in the first list are watched for reading - so, if any of them have data to read, the select.select() function returns control to your program. The second list is for writing - you might think you can always write to a socket, but actually if the other end of the connection isn't reading data fast enough then your system's write buffer can fill up and you can temporarily be unable to write. It looks like the person who gave you your code ignored this complexity, which isn't too bad for a simple example because usually the buffers are big enough you're unlikely to hit problems in simple cases like this, but it's an issue you should address in the future once the rest of your code works. The final list is watched for errors - this isn't widely used, so I'll skip it for now. Passing the empty list is fine here.
At this point someone connects to your server - as far as select.select() is concerned this counts as making the listen socket "readable", so the function returns and the list of readable sockets (the first return value) will include the listen socket.
The next part runs over all the connections which have data to read, and you can see the special case for your listen socket s. The code calls accept() on it which will take the next waiting new connection from the listen socket and turn it into a brand new socket for that connection (the listen socket continues to listen and may have other new connections also waiting on it, but that's fine - I'll cover this in a second). The brand new socket is added to the connections list and that's the end of handling the listen socket - the continue will move on to the next connection returned from select.select(), if any.
For other connections that are readable, the code calls recv() on them to recover the next 1024 bytes (or whatever is available if less than 1024 bytes). Important note - if you hadn't used select.select() to make sure the connection was readable, this call to recv() could block and halt your program until data arrived on that specific connection - hopefully this illustrates why the select.select() is required.
Once some data has been read the code runs over all the other connections (if any) and uses the send() method to copy the data down them. The code correctly skips the same connection as the data just arrived on (that's the business about q != i) and also skips s, but as it happens this isn't required since as far as I can see it's never actually added to the connections list.
Once all readable connections have been processed, the code returns to the select.select() loop to wait for more data. Note that if a connection still has data, the call returns immediately - this is why accepting only a single connection from the listen socket is OK. If there are more connections, select.select() will return again immediately and the loop can handle the next available connection. You can use non-blocking IO to make this a bit more efficient, but it makes things more complicated so let's keep things simple for now.
This is a reasonable illustration, but unfortunately it suffers from some problems:
As I mentioned, the code assumes you can always call send() safely, but if you have one connection where the other end isn't receiving properly (maybe that machine is overloaded) then your code here could fill up the send buffer and then hang when it tries to call send().
The code doesn't cope with connections closing, which will often result in an empty string being returned from recv(). This should result in the connection being closed and removed from the connections list, but this code doesn't do it.
I've updated the code slightly to try and solve these two issues:
connections = []
buffered_output = {}
while True:
    rlist, wlist, xlist = select.select(connections + [s], buffered_output.keys(), [])
    for i in rlist:
        if i == s:
            conn, addr = s.accept()
            connections.append(conn)
            continue
        try:
            data = i.recv(1024)
        except socket.error:
            data = b""
        if data:
            for q in connections:
                if q != i:
                    buffered_output[q] = buffered_output.get(q, b"") + data
        else:
            i.close()
            connections.remove(i)
            if i in buffered_output:
                del buffered_output[i]
    for i in wlist:
        if i not in buffered_output:
            continue
        bytes_sent = i.send(buffered_output[i])
        buffered_output[i] = buffered_output[i][bytes_sent:]
        if not buffered_output[i]:
            del buffered_output[i]
I should point out here that I've assumed that if the remote end closes the connection, we also want to close immediately here. Strictly speaking this ignores the potential for TCP half-close, where the remote end has sent a request and closes its end, but still expects data back. I believe very old versions of HTTP used to sometimes do this to indicate the end of the request, but in practice this is rarely used any more and probably isn't relevant to your example.
Also it's worth noting that a lot of people make their sockets non-blocking when using select - this means that a call to recv() or send() which would otherwise block will instead return an error (raise an exception in Python terms). This is done partly for safety, to make sure a careless bit of code doesn't end up blocking the application; but it also allows some slightly more efficient approaches, such as reading or writing data in multiple chunks until there's none left. Using blocking IO this is impossible because the select.select() call only guarantees there's some data to read or write - it doesn't guarantee how much. So you can only safely call a blocking send() or recv() once on each connection before you need to call select.select() again to see whether you can do so again. The same applies to the accept() on a listening socket.
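As a small illustration of that non-blocking style (Python 3, where a read that would block raises BlockingIOError; conn is assumed to be one of the accepted connection sockets):

conn.setblocking(False)

chunks = []
while True:
    try:
        chunk = conn.recv(1024)
    except BlockingIOError:      # nothing more to read right now
        break
    if not chunk:                # empty result: the peer closed the connection
        break
    chunks.append(chunk)
data = b"".join(chunks)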
The efficiency savings are only generally a problem on systems which have a large number of busy connections, however, so in your case I'd keep things simple and not worry about blocking for now. In your case, if your application seems to hang up and become unresponsive then chances are you're doing a blocking call somewhere where you shouldn't.
Finally, if you want to make this code portable and/or faster, it might be worth looking at something like libev, which essentially has several alternatives to select.select() which work well on different platforms. The principles are broadly similar, however, so it's probably best to focus on select for now until you get your code running, and then investigate changing it later.
Also, I note that a commenter has suggested Twisted which is a framework which offers a higher-level abstraction so that you don't need to worry about all of the details. Personally I've had some issues with it in the past, such as it being difficult to trap errors in a convenient way, but many people use it very successfully - it's just an issue of whether their approach suits the way you think about things. Worth investigating at the very least to see whether its style suits you better than it does me. I come from a background writing networking code in C/C++ so perhaps I'm just sticking to what I know (the Python select module is quite close to the C/C++ version on which it's based).
Hopefully I've explained things sufficiently there - if you still have questions, let me know in the comments and I can add more detail to my answer.

Memory bounds in twisted applications

Consider the following scenario: A process on the server is used to handle data from a network connection. Twisted makes this very easy with spawnProcess and you can easily connect the ProcessTransport with your protocol on the network side.
However, I was unable to determine how Twisted handles a situation where the data from the network is available faster than the process performs reads on its standard input. As far as I can see, Twisted code mostly uses an internal buffer (self._buffer or similar) to store unconsumed data. Doesn't this mean that concurrent requests from a fast connection (eg. over local gigabit LAN) could fill up main memory and induce heavy swapping, making the situation even worse? How can this be prevented?
Ideally, the internal buffer would have an upper bound. As I understand it, the OS's networking code would automatically stall the connection/start dropping packets if the OS's buffers are full, which would slow down the client. (Yes I know, DoS on the network level is still possible, but this is a different problem). This is also the approach I would take if implementing it myself: just don't read from the socket if the internal buffer is full.
Restricting the maximum request size is also not an option in my case, as the service should be able to process files of arbitrary size.
The solution has two parts.
One part is called producers. Producers are objects that data comes out of. A TCP transport is a producer. Producers have a couple useful methods: pauseProducing and resumeProducing. pauseProducing causes the transport to stop reading data from the network. resumeProducing causes it to start reading again. This gives you a way to avoid building up an unbounded amount of data in memory that you haven't processed yet. When you start to fall behind, just pause the transport. When you catch up, resume it.
The other part is called consumers. Consumers are objects that data goes in to. A TCP transport is also a consumer. More importantly for your case, though, a child process transport is also a consumer. Consumers have a few methods, one in particular is useful to you: registerProducer. This tells the consumer which producer data is coming to it from. The consumer can then call pauseProducing and resumeProducing according to its ability to process the data. When a transport (TCP or process) cannot send data as fast as a producer is asking it to send data, it will pause the producer. When it catches up, it will resume it again.
You can read more about producers and consumers in the Twisted documentation.
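A rough sketch of how the wiring might look. This is illustrative only: `proc` standing for the process transport returned by spawnProcess and the factory-attribute plumbing are assumptions, but registerProducer/unregisterProducer are the consumer methods described above:

from twisted.internet import protocol

class NetworkToProcess(protocol.Protocol):
    """Feeds bytes from a TCP connection into a child process's stdin."""

    def connectionMade(self):
        # self.factory.proc is assumed to be the process transport from spawnProcess.
        # Registering our TCP transport as a streaming producer lets the process
        # transport pause it when its stdin buffer fills and resume it when it drains.
        self.factory.proc.registerProducer(self.transport, True)

    def dataReceived(self, data):
        self.factory.proc.write(data)

    def connectionLost(self, reason):
        self.factory.proc.unregisterProducer()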
