Memory bounds in Twisted applications - Python

Consider the following scenario: A process on the server is used to handle data from a network connection. Twisted makes this very easy with spawnProcess and you can easily connect the ProcessTransport with your protocol on the network side.
However, I was unable to determine how Twisted handles a situation where the data from the network is available faster than the process performs reads on its standard input. As far as I can see, Twisted code mostly uses an internal buffer (self._buffer or similar) to store unconsumed data. Doesn't this mean that concurrent requests from a fast connection (e.g. over local gigabit LAN) could fill up main memory and induce heavy swapping, making the situation even worse? How can this be prevented?
Ideally, the internal buffer would have an upper bound. As I understand it, the OS's networking code would automatically stall the connection/start dropping packets if the OS's buffers are full, which would slow down the client. (Yes I know, DoS on the network level is still possible, but this is a different problem). This is also the approach I would take if implementing it myself: just don't read from the socket if the internal buffer is full.
Restricting the maximum request size is also not an option in my case, as the service should be able to process files of arbitrary size.

The solution has two parts.
One part is called producers. Producers are objects that data comes out of. A TCP transport is a producer. Producers have a couple useful methods: pauseProducing and resumeProducing. pauseProducing causes the transport to stop reading data from the network. resumeProducing causes it to start reading again. This gives you a way to avoid building up an unbounded amount of data in memory that you haven't processed yet. When you start to fall behind, just pause the transport. When you catch up, resume it.
The other part is called consumers. Consumers are objects that data goes in to. A TCP transport is also a consumer. More importantly for your case, though, a child process transport is also a consumer. Consumers have a few methods, one in particular is useful to you: registerProducer. This tells the consumer which producer data is coming to it from. The consumer can then call pauseProducing and resumeProducing according to its ability to process the data. When a transport (TCP or process) cannot send data as fast as a producer is asking it to send data, it will pause the producer. When it catches up, it will resume it again.
You can read more about producers and consumers in the Twisted documentation.
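For illustration, here is a minimal sketch of wiring the two halves together. The class names and the use of cat as the child command are placeholders, not part of the question; the key line is the registerProducer call, which lets the process transport pause socket reads when the child's stdin buffer fills up.

    from twisted.internet import protocol, reactor

    class ChildProcess(protocol.ProcessProtocol):
        """Relays the child's stdout back to the network connection."""
        def __init__(self, peer):
            self.peer = peer                      # the network-side Protocol

        def outReceived(self, data):
            self.peer.transport.write(data)

    class RelayToProcess(protocol.Protocol):
        """Feeds bytes from the network into the child's stdin."""
        def connectionMade(self):
            self.child = ChildProcess(self)
            # 'cat' is only a stand-in command for this sketch.
            self.process = reactor.spawnProcess(self.child, "cat", ["cat"])
            # Register the TCP transport (a producer) with the process transport
            # (a consumer). When the child's stdin buffer fills, the process
            # transport calls pauseProducing() on the TCP transport, which stops
            # reading from the socket until resumeProducing() is called.
            self.process.registerProducer(self.transport, True)

        def dataReceived(self, data):
            self.process.write(data)              # goes to the child's stdin

        def connectionLost(self, reason):
            self.process.unregisterProducer()
            self.process.loseConnection()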

Related

Producer-consumer architecture over network in python?

I need a producer-consumer kind of architecture, where the producer puts data in a queue over and over, and then a consumer reads from that queue as fast as it can process the data.
For the producer and consumer running in separate processes we already have multiprocessing, with Queue where you have put and get. So even if the producer runs at 2-3 times the speed of the consumer, all the data is in the queue (assume memory use is not a problem) and the consumer just calls q.get whenever it needs to.
But I need the producer and consumer to be connected over a network, so probably through a socket (but I am open to other methods). The big problem with sockets is that they do not separate objects automatically like queues do.
For a multiprocessing.Queue if I call q.get I get the next object, the queue takes care of how many bytes to read and recreates the object for me, q.get just returns the object. With a socket I have to pickle.dumps to send it and then I need to be careful how many bytes to read from the socket (in case there is more than 1 object in the socket) and then pickle.loads the result. The main problem is keeping track of object sizes.
If I put 10 objects of different sizes that add up to 1000 bytes in a Queue, then the queue takes care of how many bytes to read for every object when calling q.get. For a socket, if I pickle the 10 objects and send them, the socket has no idea how to split the big 1000-byte string inside it, and creating a mechanism for this means adding a lot of new code.
Is there some kind of... socket-based Queue or similar?
This is usually solved with external software that acts as a broker between the producer and consumer over the network. There are a few open-source projects you can look into:
RabbitMQ
Kafka
Redis
Celery
They are all different in their own way, but they all have Python libraries you can easily pip install to begin using them. All of them will require that a third process is running to serve as the broker of messages.
Similarly, there are paid products for this as well - typically hosted in one of the big cloud providers - like AWS SQS.
This is not to say that it is not possible to create a custom socket or server implementation to do this... but, a lot of times in programming, it's best not to try to reinvent the wheel.
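As a concrete illustration of the broker approach, here is a minimal sketch using Redis as a network-reachable queue. It assumes a Redis server is running on localhost:6379 and that the redis package is installed (pip install redis); the key name "jobs" is arbitrary.

    import pickle
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # producer side: serialise the object and push it onto a Redis list
    def put(obj):
        r.rpush("jobs", pickle.dumps(obj))

    # consumer side: block until an item is available, then deserialise it
    def get():
        _key, payload = r.blpop("jobs")   # Redis tracks message boundaries for you
        return pickle.loads(payload)

The broker keeps track of where one message ends and the next begins, which is exactly the framing work you would otherwise have to write yourself on top of a raw socket.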

ZeroMQ: How to construct simple asynchronous broker? Seems impossible

I am building a simple star-like client-server topology.
The idea is that clients connect to the server, can send messages, and the server can send messages to them, when the server decides to. There will be a relatively small number of clients, about 30, but enough that it is not sensible to send all outgoing data to all of them. I'm sure I'm just boneheaded, but this seems to be completely impossible with ZeroMQ.
The last part is the reason this question does not provide an answer.
The catch is this:
I can use a ROUTER socket to receive messages from clients. This also carries identification. However, I cannot use the same socket for sending, since ZeroMQ sockets are not thread-safe, i.e. I can't have one thread waiting for incoming messages and another sending outgoing messages from the server itself. I am not aware of any way to block on both a socket.recv() and, for example, a .get() on a queue at the same time in a single Python thread. Maybe there is a way to do that.
Using two sockets - one incoming, one outgoing - doesn't work either. The identification is not shared between sockets, so the sending socket would still have to be polled to obtain the client-id mapping, even if only once. We obviously can't use a separate port for each client. There seems to be no way for the server to send a message to a single client of its own volition.
(Subscription topics are a dead end too: message filtering is performed on the client side, and the server would just flood all the client networks.)
In the end, plain TCP sockets can handle this sort of asynchronous situation easily, but effective message framing in Python is a nightmare to build. All I'm essentially after is a reliable socket that handles messages and has well-defined failure modes.
I don't know Python but for C/C++ I would use zmq_poll(). There are several options, depending on your requirements.
Use zmq_poll() to wait for messages from clients. If a message arrives, process it. Also use a time-out. When the time-out expires, check if you need to send messages to clients and send them.
zmq_poll() can also wait on general file descriptors. You can use some type of file descriptor and trigger it (write to it) from another process or thread when you have a message to send to a client. If this file descriptor is triggered, send messages to clients.
Use ZeroMQ sockets internally inside your server. Use zmq_poll() to wait both on messages from clients and internal processes or threads. If the internal sockets are triggered, send messages to clients.
You can use the file descriptor or internal ZeroMQ sockets just for triggering but you can also send the message content through the file descriptor or ZeroMQ socket.
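As a rough Python sketch of the third option: a ROUTER socket faces the clients while an internal inproc PULL socket lets other threads of the server hand over messages destined for a specific client. The endpoint names are arbitrary, and the sketch assumes the clients are DEALER sockets sending single-frame messages.

    import zmq

    ctx = zmq.Context()

    clients = ctx.socket(zmq.ROUTER)
    clients.bind("tcp://*:5555")

    internal = ctx.socket(zmq.PULL)
    internal.bind("inproc://outgoing")     # worker threads connect PUSH sockets
                                           # created from the same Context here

    poller = zmq.Poller()
    poller.register(clients, zmq.POLLIN)
    poller.register(internal, zmq.POLLIN)

    while True:
        events = dict(poller.poll())       # blocks on both sockets at once
        if clients in events:
            client_id, payload = clients.recv_multipart()
            # ... process the incoming message, remember client_id if needed
        if internal in events:
            # worker threads send [client_id, payload]; relay it to that client
            client_id, payload = internal.recv_multipart()
            clients.send_multipart([client_id, payload])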
Q : "ZeroMQ: How to construct simple asynchronous broker?"
The concept builds on a few assumptions that are not supported or do not hold :
a)Python threads actually never execute concurrently, they are re-[SERIAL]-ised into a sequence of soloists execution blocks & for any foreseeable future will remain such, since ever & forever (as Guido van ROSSUM has explained this feature to be a pyramidal reason for collision prevention - details on GIL-lock, serving this purpose, are countless )
b)ZeroMQ thread-safeness has nothing to do with using a blocking-mode for operations.
c)ZeroMQ PUB/SUB archetype does perform a topic-filtering, yet in different versions on different sides of the "ocean" :
Until v3.1, subscription mechanics ( a.k.a. a TOPIC-filter ) was handled on the SUB-side, so this part of the processing got distributed among all SUB-s ( at a cost of uniformly wide data-traffic across all transport-classes involved ) and there was no penalty, except for a sourcing such data-flow related workload ... on the PUB-side.
Since v3.1, the TOPIC-filter is processed on the PUB-side, at a cost of such a processing overhead & memory allocations, but saving all the previously wasted transport-capacities, consumed just to later realise at the SUB-side the message is not matching the TOPIC-filter and will be disposed off.
Using .poll()-based designs with zmq.NOBLOCK modes of the .recv() and .send() methods will never leave you in an ambiguous, let alone an unsalvageable, deadlocked waiting state, and adds the capability to design even a lightweight, priority-driven soft-scheduler that services the sockets at different relative priority levels.
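A small sketch of that pattern, with two placeholder inproc endpoints standing in for a higher- and a lower-priority message source:

    import zmq

    ctx = zmq.Context()
    high = ctx.socket(zmq.PULL)
    high.bind("inproc://high")             # arbitrary endpoint names
    low = ctx.socket(zmq.PULL)
    low.bind("inproc://low")

    while True:
        # drain the high-priority socket first; poll(0) never blocks
        while high.poll(0):
            msg = high.recv(zmq.NOBLOCK)   # ... handle urgent work
        # then give the low-priority socket one bounded chance (10 ms)
        if low.poll(timeout=10):
            msg = low.recv(zmq.NOBLOCK)    # ... handle background work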
Given your strong exposure to real-time systems, you might like to have a read through the ZeroMQ framework's properties to review them.

Interprocess communication with SPSC queue in python

I have multiple write-heavy Python applications (producer1.py, producer2.py, ...) and I'd like to implement an asynchronous, non-blocking writer (consumer.py) as a separate process, so that the producers are not blocked by disk access or contention.
To make this more easily optimizable, assume I just need to expose a logging call that passes a fixed length string from a producer to the writer, and the written file does not need to be sorted by call time. And the target platform can be Linux-only. How should I implement this with minimal latency penalty on the calling thread?
This seems like an ideal setup for multiple lock-free SPSC queues but I couldn't find any Python implementations.
Edit 1
I could implement a circular buffer as a memory-mapped file on /dev/shm, but I'm not sure if I'll have atomic CAS in Python?
The simplest way would be to use an async TCP/Unix socket server in consumer.py.
Using HTTP would be an overhead in this case.
A producer (a TCP/Unix socket client) sends data to the consumer, and the consumer responds right away, before writing the data to the disk drive.
File I/O in the consumer is blocking, but as stated above it will not block the producers.
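A minimal sketch of that design, assuming Python 3.9+ (for asyncio.to_thread) and placeholder paths; each record is one newline-terminated line:

    import asyncio

    LOG_PATH = "/tmp/consumer.log"          # placeholder paths
    SOCK_PATH = "/tmp/consumer.sock"

    def append_record(line):
        # plain blocking file I/O, run off the event loop
        with open(LOG_PATH, "ab") as f:
            f.write(line)

    async def handle(reader, writer):
        while True:
            line = await reader.readline()  # one record per line
            if not line:
                break
            writer.write(b"ok\n")           # acknowledge before touching the disk
            await writer.drain()
            await asyncio.to_thread(append_record, line)
        writer.close()

    async def main():
        server = await asyncio.start_unix_server(handle, path=SOCK_PATH)
        async with server:
            await server.serve_forever()

    asyncio.run(main())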

ZeroMQ: socket per data type or just one socket?

I've got a program which receives information from about 10 other (sensor reading) programs (all controlled by myself). I now want to make them communicate using ZeroMQ.
For most of the queues the important thing is that the central receiving program always has the latest sensor data, all older messages are not important anymore. If a couple messages get lost I don't care. So for all of them I started out with a separate PUB/SUB socket; one for each program. But I'm not sure if that is the right way to do it. As far as I understand I have two options:
Make a separate socket for every program and read them out in a loop. That way I know by the socket what the information is I'm receiving (I'm often just sending an int).
Make one socket to which all the programs connect, and with every message I send a string which tells the receiving end what the message is about.
All connections are on a PUB/SUB basis, so creating one socket would work out well. I'm just not sure if that is the most efficient way to do it.
All tips are welcome!
PUB/SUB is fine and allows an easy conversion from N-sensors:1-logger into N-sensors:2+-loggers. One might also benefit from the conceptual separation of a socket from an access port, to which more than one socket may get connected.
How to always get just the actual (last) sensor readout:
If you are not bound, due to system-integration constraints, to some early ZeroMQ API, there is a lovely feature for exactly this, via the .setsockopt( ZMQ_CONFLATE, True ) method:
ZMQ_CONFLATE: Keep only last message
If set, a socket shall keep only one message in its inbound/outbound queue, this message being the last message received/the last message to be sent. Ignores ZMQ_RCVHWM and ZMQ_SNDHWM options. Does not support multi-part messages, in particular, only one part of it is kept in the socket internal queue.
On design dilemma:
Unless your real-time control stability imposes some hard real-time limit, the PUB side freely decides how often a new value is instructed to .send() to the SUB(s). No magic is needed here, least of all once the ZMQ_CONFLATE option is set on the internally managed outgoing queue.
The SUB-side receiver(s) will also benefit from setting the ZMQ_CONFLATE option on the internally managed incoming queue, but given that a set of individual .bind()-s instantiates separate landing ports for the delivery of the different individual sensor readouts, your "last" values will remain consistently the last readouts per sensor. If all readouts went into a common landing pad, your receiving process would lose (have masked out) all readouts except the one that just happened to be the "last" one right before .recv() took place, which would not help much, would it?
If some I/O-performance tweaking becomes necessary, the .Context( n_IO_threads ) and ZMQ_AFFINITY-mapping options can increase and prioritise the resources the I/O data pump may harness for better I/O performance.
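A short sketch of the receiving side with ZMQ_CONFLATE set; the endpoint is a placeholder and, per the quoted documentation, each readout must be a single-frame message:

    import zmq

    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.CONFLATE, 1)        # keep only the most recent message
    sub.setsockopt(zmq.SUBSCRIBE, b"")     # no topic filtering needed here
    sub.bind("tcp://*:5556")               # one landing port per sensor, as above

    latest = sub.recv()                    # always the last readout that arrived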
Unless you're up against a tight real-time requirement, there's not much point in having more sockets than necessary. ZMQ's fair queuing ought to take care of giving each sensor program equal attention (see Figure 6 in the guide).
If your sensor programs are on other devices connected by Ethernet, the ultimate performance of your programs is limited by the bandwidth of the Ethernet NIC in your computer. A single thread program handling a single PULL socket stands a good chance of being able to process the data coming in faster than it can transit the NIC.
If that's so, then you may as well stick to a single socket and enjoy the simpler code. It's not very hard dealing with multiple sockets, but it's far easier to deal with one. For example, with one single socket you don't have to tell each sensor program what network port to connect to - it can be a constant.
PUSH/PULL sounds like a more natural pattern for your situation than PUB/SUB, but that won't make much difference.
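As a sketch of the single-socket PUSH/PULL layout (host name, port, and payload are placeholders; both sides are shown in one script only so the sketch is self-contained):

    import zmq

    ctx = zmq.Context()

    # central receiving program: one well-known, constant port for every sensor
    pull = ctx.socket(zmq.PULL)
    pull.bind("tcp://*:5557")

    # each sensor program connects a PUSH socket to that same constant endpoint
    push = ctx.socket(zmq.PUSH)
    push.connect("tcp://localhost:5557")
    push.send_json({"sensor": "temp1", "value": 21.5})

    reading = pull.recv_json()             # fair-queued across all connected senders
    print(reading)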
Lastness
Lastness is going to be your (potential) problem. The whole point of things like ZMQ is that they will deliver messages in the order they're sent. Thus when you read a message, it is by definition the "last" message so far as the recipient is concerned. The recipient has no idea whether or not there is another message on the way, in transit.
This is a feature of Actor model architectures (which is what ZMQ is). Messages get buffered up in the transport, and there's no information about the newness of the message to be learned when it's read. All you know is that it was sent some time beforehand. There is no execution rendezvous with the sender.
Now, you either process it as if it is the last message, or you wait for a period of time to see if another one comes along before processing it. The easiest thing to do is to simply process each message as if it is the last.
Contrast this with a Communicating Sequential Processes architecture. It's basically the same as an Actor model architecture, except that the transport does not buffer messages. Message sends block until the recipient has called message read.
Thus when you read a message, the recipient knows that it is the last one sent by the sender. And the sender knows that the message it has sent has been received at that very instant by the recipient. So the knowledge of lastness is absolute - the message received really is the last one sent.
However, unless you have something fairly heavyweight going on I wouldn't worry about it. You are quite likely to be able to keep up with your sensor data stream even if the messages you're reading aren't the latest in the queue.
You can nearly make ZMQ into CSP by setting the high-water mark on the sending end's socket to 1. That means that you can buffer up at most 1 message. That's not the same as 0, and unfortunately setting the HWM to 0 means "unlimited size buffer".
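A sketch of that tweak on the sending side (the endpoint name is a placeholder):

    import zmq

    ctx = zmq.Context()
    push = ctx.socket(zmq.PUSH)
    push.setsockopt(zmq.SNDHWM, 1)         # buffer at most one outgoing message
    push.connect("tcp://central-host:5557")

    # once the high-water mark is reached, send() blocks (or raises zmq.Again
    # when called with zmq.NOBLOCK) until the peer drains its side of the queue
    push.send(b"reading")

It is only "nearly" CSP because the OS socket buffers and the receiver's own queue still hold data in transit.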

Python/Twisted - Sending to a specific socket object?

I have a "manager" process on a node, and several worker processes. The manager is the actual server who holds all of the connections to the clients. The manager accepts all incoming packets and puts them into a queue, and then the worker processes pull the packets out of the queue, process them, and generate a result. They send the result back to the manager (by putting them into another queue which is read by the manager), but here is where I get stuck: how do I send the result to a specific socket? When dealing with the processing of the packets on a single process, it's easy, because when you receive a packet you can reply to it by just grabbing the "transport" object in-context. But how would I do this with the method I'm using?
It sounds like you might need to keep a reference to the transport (or protocol) along with the bytes that just came in on that protocol in your 'event' object. That way responses that came in on a connection go out on the same connection.
If things don't need to be processed serially perhaps you should think about setting up functors that can handle the data in parallel to remove the need for queueing. Just keep in mind that you will need to protect critical sections of your code.
Edit:
Judging from your other question about evaluating your server design it would seem that processing in parallel may not be possible for your situation, so my first suggestion stands.
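A minimal sketch of that suggestion, assuming the workers run in separate processes (so the transport reference itself cannot cross the queue): tag every packet with a connection id, keep an id-to-protocol map in the manager, and use it to route results back. All names here (connections, work_queue, result_queue) are illustrative.

    import itertools
    from twisted.internet import protocol

    class ManagedConnection(protocol.Protocol):
        _ids = itertools.count()

        def __init__(self, factory):
            self.factory = factory
            self.conn_id = next(self._ids)

        def connectionMade(self):
            self.factory.connections[self.conn_id] = self

        def connectionLost(self, reason):
            self.factory.connections.pop(self.conn_id, None)

        def dataReceived(self, data):
            # the queued 'event' carries the connection id alongside the payload
            self.factory.work_queue.put((self.conn_id, data))

    def deliver_results(factory, result_queue):
        """Poll the result queue (e.g. from a LoopingCall) and route replies."""
        while not result_queue.empty():
            conn_id, result = result_queue.get()
            conn = factory.connections.get(conn_id)
            if conn is not None:
                conn.transport.write(result)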
