I have a producer sending data over PUSH / PULL to multiple workers. All the workers need to receive all of their data before performing a computation task.
I tried a sync using a PUB / SUB socket sending a "go", but as PUSH sockets are non-blocking, the "go" is received before the end of the data stream...
Sender:
import zmq

context = zmq.Context()
push_socket = context.socket(zmq.PUSH)
push_socket.bind("tcp://127.0.0.1:5557")
pull_socket = context.socket(zmq.PULL)
pull_socket.bind("tcp://127.0.0.1:5558")
pub_socket = context.socket(zmq.PUB)
pub_socket.bind("tcp://127.0.0.1:5559")

for index, data in enumerate(data_stream):  # data_stream: the items to distribute
    push_socket.send_json({"data": data, "id": index})

pub_socket.send_json({"command": "map"})
Receiver:
import zmq

context = zmq.Context()

# receive work
consumer_receiver = context.socket(zmq.PULL)
consumer_receiver.connect("tcp://127.0.0.1:5557")

# receive commands
consumer_command = context.socket(zmq.SUB)
consumer_command.subscribe("")
consumer_command.connect("tcp://127.0.0.1:5559")

poller = zmq.Poller()
poller.register(consumer_receiver, zmq.POLLIN)
poller.register(consumer_command, zmq.POLLIN)

while True:
    events = dict(poller.poll(100))
    if consumer_command in events:
        received = consumer_command.recv_json()
        command = received["command"]
        print("received command : ", command)
    if consumer_receiver in events:
        received = consumer_receiver.recv_json()
        print("received data", received)
Receiver output:
received data {'data': ['Hi'], 'id': 0}
received command : map
received data {'data': ['hi'], 'id': 1}
...
I would like to have:
received data {'data': ['Hi'], 'id': 0}
received data {'data': ['hi'], 'id': 1}
...
received command : map
I tried to set a HWM of 1 for the PUSH socket but it didn't work.
How can I send a synchronization message to all workers after the PUSH is finished?
You are seeking to implement a barrier.
ZeroMQ is all about Actor model programming, and one characteristic is that there is no explicit rendezvous implied in sending and receiving messages. That is, a send will return regardless of whether or not the other end has read the message.
So this means that a barrier (a type of rendezvous) has to be synthesised on top of ZeroMQ's Actor model.
1) Use a PUSH / PULL socket pair to get the data to the workers.
2) Use a separate PUSH / PULL socket pair for the workers to send back an "I have the data and am ready to proceed" message to the producer.
3) Have the producer wait for these "I can proceed" messages.
4) When it has received one from every worker, send a "go" message on the PUB / SUB socket to the workers.
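A minimal sketch of the producer's side of steps 1 to 4, reusing the question's ports and assuming a known worker count; the end-of-stream sentinel (id -1) is one illustrative way for a worker to know it has everything, and each worker is expected to send one message on the ready channel once it has seen its sentinel:

import zmq

N_WORKERS = 4          # assumption: the producer knows how many workers exist

context = zmq.Context()

push_socket = context.socket(zmq.PUSH)    # step 1: data out to the workers
push_socket.bind("tcp://127.0.0.1:5557")

ready_socket = context.socket(zmq.PULL)   # step 2: "ready" messages back
ready_socket.bind("tcp://127.0.0.1:5558")

pub_socket = context.socket(zmq.PUB)      # step 4: the "go" broadcast
pub_socket.bind("tcp://127.0.0.1:5559")

for index, data in enumerate(data_stream):            # data_stream: your items
    push_socket.send_json({"data": data, "id": index})

# One end-of-stream sentinel per worker; PUSH round-robins across connected
# workers, so each one receives exactly one (assuming all N are connected).
for _ in range(N_WORKERS):
    push_socket.send_json({"id": -1})

# Step 3, the barrier: wait until every worker has confirmed receipt.
for _ in range(N_WORKERS):
    ready_socket.recv_json()

pub_socket.send_json({"command": "map"})  # now no worker can miss the "go"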
Communicating Sequential Processes
Simply out of interest you may wish to compare Actor model programming with Communicating Sequential Processes (which in Rust, Erlang, and (I think?) Go is making something of a comeback). In CSP, sending / receiving a message is a rendezvous. This has several benefits:
the sender knows that a message has been received and not just queued,
it forces one to properly address architecture and resource allocation if one has performance and latency goals. You can't hide messages in transit. So if one had not supplied enough workers the producer would very obviously not be able to offload messages; the deficiency cannot be temporarily hidden by increased latency.
if you have managed to construct an architecture that can deadlock, livelock, etc it always will. Whereas an Actor model architecture can appear to be perfectly fine for years until that one day when the network gets a little busier.
To do what you want with CSP, you'd be able to omit steps 2 and 3 above. The Producer would know that every worker had received its data when the send to the last worker returned, and the "go" can be sent out immediately.
Personally speaking, I really wish ZeroMQ had the option to be CSP, not Actor. Then it would be fabulous, instead of being just pretty tremendous. What makes it really good is that it doesn't matter whether it's tcp, ipc, inproc, etc.; it all behaves the same (with speed variations, obviously).
AFAIK Rust, Erlang and Go CSP channels go no further than the process. ZMQ can be inter and/or intra process and/or inter computer, which makes it highly suitable for developing systems that may outgrow one computer. Need to offload a thread to another computer? Change the connection string, no other code changes required. Very nice.
You are using separate streams for command and data; this more or less guarantees synchronization problems. On the recipient side you will have two stream buffers: the first with a lot of data to handle, the second with only the command, and poll() will merely tell you that both are ready to be read.
I see two ways to handle this problem:
1) Keep it simple: use only one stream. Everything you send at one end will be received, in order, at the other end; TCP guarantees that. If you're using JSON, you can just add 'type': 'command' or 'type': 'data' to discriminate between message types (see the sketch after this list).
2) If, for some reason, you really need two streams (e.g. you really want to play with the publisher/subscriber pattern), the receiver should acknowledge reception of the last data batch to the sender before the sender may send its command. This option would also be the choice if all workers need to receive their data before any of them starts on the command.
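A rough sketch of option 1, with an illustrative 'type' field; handle_command, handle_data, and n_workers are hypothetical names, and note that on a PUSH/PULL stream each message (including a command) goes to exactly one worker, so send one command per worker:

# Sender: one stream carries both data and commands.
for index, data in enumerate(data_stream):      # data_stream: your items
    push_socket.send_json({"type": "data", "data": data, "id": index})
for _ in range(n_workers):                      # one command per worker
    push_socket.send_json({"type": "command", "command": "map"})

# Receiver: discriminate on the 'type' field.
while True:
    message = consumer_receiver.recv_json()
    if message["type"] == "command":
        handle_command(message["command"])      # hypothetical handler
    else:
        handle_data(message)                    # hypothetical handler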
Related
First off, I've read around a fair amount, including many threads on this site; however, I still need some clarification on sockets, TCP, and networking in Python, as I feel like I don't fully understand what's happening in my program.
I'm sending data from a server to a client via a Unix domain socket (AF_UNIX) using a stream connection (SOCK_STREAM).
On the server side, a process is continuously putting items on a Queue.Queue and another process is sending items to the client by running
while True:
    conn.sendall(queue.get())
On the client side, data is read by running
while True:
    conn.recv(1024)
    # time.sleep(10)
Now, I emulate a slow client by putting the client process to sleep after every call to recv(). What I expect is that the queue on the server side fills up, since send() should block because the client can't read off data fast enough.
I monitor the number of items sent to the client as well as the queue size. What I notice is that several dozen messages (the count depends roughly on the size of the messages, though slightly different message sizes might behave the same) are sent to the client (which the client receives with a delay, due to time.sleep()) before the queue starts to fill up.
What is happening here? Why is send() not blocking immediately?
I suspect that some sort of network or file buffer is involved, which queues the sent items and fills up before my implemented queue does.
There are a number of buffers in various places in the system, on both the sender and the receiver. Your call to a sending function won't block until all those buffers are filled up. When the receiver drains some of the buffers, data will flow again and eventually it will unblock the send call.
Typically there's a buffer in the sender holding data waiting to be put on the wire, a buffer "in flight" allowing a certain number of bytes to be sent before having to wait for the receiver to acknowledge, and lastly receive buffers holding data that has been acknowledged but not yet delivered to the receiving application.
Were this not so, forward progress would be extremely limited. The sender would be stuck waiting to send until the receiver called receive. Then, whichever one finishes first would have to wait for the other one. Even if the sender was finished first, it couldn't make any forward progress at all until the receiver finished processing the previous chunk of data. That would be quite sub-optimal for most applications.
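If you want to see send() block sooner while experimenting, you can ask the kernel for smaller socket buffers; a best-effort sketch, where server_conn and client_conn stand for the connected sockets from the question (the OS enforces minimum sizes, and exact behaviour differs between address families):

import socket

# Request small kernel buffers; the OS may round these up to its minimum.
server_conn.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 4096)
client_conn.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)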
I've got a program which receives information from about 10 other (sensor reading) programs (all controlled by myself). I now want to make them communicate using ZeroMQ.
For most of the queues the important thing is that the central receiving program always has the latest sensor data; older messages are no longer important. If a couple of messages get lost, I don't care. So for all of them I started out with a separate PUB/SUB socket, one for each program. But I'm not sure if that is the right way to do it. As far as I understand, I have two options:
Make a separate socket for every program and read them out in a loop. That way the socket itself tells me what information I'm receiving (I'm often just sending an int).
Make one socket to which all the programs connect, and with every message I send a string which tells the receiving end what the message is about.
All connections are on a PUB/SUB basis, so creating one socket would work out well. I'm just not sure if that is the most efficient way to do it.
All tips are welcome!
PUB/SUB is fine, and allows an easy conversion from N-sensors : 1-logger into N-sensors : 2+ loggers. One might also benefit from a conceptual separation of a socket from an access port, to which more than one socket may get connected.
How to always get JUST THE ACTUAL (LAST) SENSOR READOUT:
If you are not bound, due to system-integration constraints, to some early ZeroMQ API, there is a lovely feature for exactly this, via the .setsockopt( ZMQ_CONFLATE, True ) method:
ZMQ_CONFLATE: Keep only last message
If set, a socket shall keep only one message in its inbound/outbound queue, this message being the last message received/the last message to be sent. Ignores ZMQ_RCVHWM and ZMQ_SNDHWM options. Does not support multi-part messages, in particular, only one part of it is kept in the socket internal queue.
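In pyzmq that looks roughly like this; the endpoint is illustrative, the option must be set before connect(), and conflated sockets cannot carry multi-part messages:

import zmq

context = zmq.Context()
sub = context.socket(zmq.SUB)
sub.setsockopt(zmq.CONFLATE, 1)       # keep only the newest message
sub.setsockopt_string(zmq.SUBSCRIBE, "")
sub.connect("tcp://127.0.0.1:6001")   # illustrative sensor endpoint

reading = sub.recv()                  # always the most recent readout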
On design dilemma:
Unless your real-time control stability imposes some hard real-time limit, the PUB side freely decides how often a new value is handed to .send() for the SUB(-s). No magic is needed here, the less so with the ZMQ_CONFLATE option set on the internally managed outgoing queue.
The SUB(-s) side receiver(s) will also benefit from the ZMQ_CONFLATE option set on the internally managed incoming queue, but only if a set of individual .bind()-s instantiates a separate landing port for each individual sensor readout will your "last" values remain consistently the "last" readouts. If all readouts went into one common landing pad, your receiving process would lose (have masked out) all readouts but the one that just happened to be the "last" right before .recv() took place, which would not help much, would it?
If some I/O-performance-related tweaking becomes necessary, the .Context( n_IO_threads ) + ZMQ_AFFINITY-mapping options may increase and prioritise the resources the ioDataPump may harness for increased I/O performance.
Unless you're up against a tight real-time requirement, there's not much point in having more sockets than necessary. ZMQ's fair queuing ought to take care of giving each sensor program equal attention (see Figure 6 in the guide).
If your sensor programs are on other devices connected by Ethernet, the ultimate performance of your programs is limited by the bandwidth of the Ethernet NIC in your computer. A single-threaded program handling a single PULL socket stands a good chance of being able to process the data coming in faster than it can transit the NIC.
If that's so, then you may as well stick to a single socket and enjoy the simpler code. It's not very hard dealing with multiple sockets, but it's far easier to deal with one. For example, with one single socket you don't have to tell each sensor program what network port to connect to - it can be a constant.
PUSH/PULL sounds like a more natural pattern for your situation than PUB/SUB, but that won't make much difference.
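For example, a single PULL socket collecting every sensor; the port and message shape are illustrative:

import zmq

context = zmq.Context()
receiver = context.socket(zmq.PULL)
receiver.bind("tcp://*:5557")        # every sensor program connects here

while True:
    reading = receiver.recv_json()   # e.g. {"sensor": "temp0", "value": 21}
    # fair queuing interleaves the connected senders automatically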
Lastness
Lastness is going to be your (potential) problem. The whole point of things like ZMQ is that they will deliver messages in the order they're sent. Thus when you read a message, it is by definition the "last" message so far as the recipient is concerned. The recipient has no idea whether or not there is another message on the way, in transit.
This is a feature of Actor model architectures (which is what ZMQ is). Messages get buffered up in the transport, and there's no information about the newness of the message to be learned when it's read. All you know is that it was sent some time beforehand. There is no execution rendezvous with the sender.
Now, you either process it as if it is the last message, or you wait for a period of time to see if another one comes along before processing it. The easiest thing to do is to simply process each message as if it is the last.
Contrast this with a Communicating Sequential Processes architecture. It's basically the same as an Actor model architecture, except that the transport does not buffer messages. Message sends block until the recipient has called message read.
Thus when you read a message, the recipient knows that it is the last one sent by the sender. And the sender knows that the message it has sent has been received at that very instant by the recipient. So the knowledge of lastness is absolute: the message received really is the last one sent.
However, unless you have something fairly heavyweight going on I wouldn't worry about it. You are quite likely to be able to keep up with your sensor data stream even if the messages you're reading aren't the latest in the queue.
You can nearly make ZMQ into CSP by setting the high water mark on the sending end's socket to 1. That means that you can buffer at most 1 message. That's not the same as 0, and unfortunately setting the HWM to 0 means "unlimited buffer size".
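In pyzmq that setting looks like this (the endpoint is illustrative; bear in mind the OS's TCP buffers still sit underneath, which is why it is only "nearly" CSP):

sender = context.socket(zmq.PUSH)
sender.setsockopt(zmq.SNDHWM, 1)   # queue at most one outgoing message
sender.bind("tcp://*:5557")        # illustrative endpoint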
I'm trying to implement a distributed PUSH/PULL (a kind of MapReduce) model with Python and ZMQ, as described here: http://taotetek.net/2011/02/02/python-multiprocessing-with-zeromq/. In this example, result_manager knows exactly how many messages to expect and when to send the "FINISHED" state to workers.
Let's assume I have a big but finite stream of data of unknown length. In this case I can't know exactly where to stop. I tried sending "FINISHED" from the ventilator at the end instead of from result_manager but, of course, the workers receive it in the middle of processing (because it travels on a separate channel) and die immediately, so a lot of data is lost.
Alternatively, if I use the same work_message queue to send the "FINISHED" state, it gets captured by the first available worker while the others hang; that's also as expected.
Is there any other model I should use here? Or can you please point me to some best practices for this case?
Alternatively, if I use the same work_message queue to send the "FINISHED" state, it gets captured by the first available worker while the others hang; that's also as expected.
You can easily work around this.
Send "FINISH" from VENTILATOR to RESULT_MANAGER PULL socket.
RESULT_MANAGER receives "FINISH" and publishes this message to all WORKERS through PUB socket.
All WORKERS receive "FINISH" message on SUB sockets and kill themselves.
The ZMQ Guide has example code showing how to send something from the VENTILATOR to the RESULT_MANAGER in the divide-and-conquer design pattern.
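A rough sketch of that relay, with illustrative ports and message shape, assuming an existing context and a hypothetical handle_result() for the normal results:

# VENTILATOR: after the last work item, tell the sink directly.
finish = context.socket(zmq.PUSH)
finish.connect("tcp://127.0.0.1:5558")     # the result_manager's PULL port
finish.send_json({"command": "FINISHED"})

# RESULT_MANAGER: collect results; rebroadcast FINISHED to every worker.
results = context.socket(zmq.PULL)
results.bind("tcp://127.0.0.1:5558")
control = context.socket(zmq.PUB)
control.bind("tcp://127.0.0.1:5559")
while True:
    msg = results.recv_json()
    if msg.get("command") == "FINISHED":
        control.send_json(msg)             # all SUB-connected workers see it
        break
    handle_result(msg)                     # hypothetical result handling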
My problem is, in essence, trying to half-close a zmq socket.
In simple terms I have a pair of PUSH/PULL sockets in Python.
The PUSH socket never stops sending, but the PULL socket should be able to clean itself up in a following way:
Stop accepting any additional messages to the queue
Process the messages still in the queue
Close the socket etc.
I don't want to affect the PUSH socket in any way; it can keep accumulating its own queue until another PULL socket comes around (or one that is already there takes over). The LINGER option doesn't seem to work with recv() (just with send()).
One option might be to have a broker in between with the broker PUSH and receiver PULL HWM set to zero. Then the broker's PULL would accumulate the messages. However, I'd rather not do this. Is there any other way?
I believe you are confusing which socket type will queue messages. According to the zmq_socket docs, a PUSH socket will queue its messages but a PULL socket doesn't have any type of queuing mechanism.
So what you're asking to be able to do would be something of the following:
1) Stop recv'ing any additional messages to the PULL socket.
2) Close the socket etc.
The PUSH socket will continue to 'queue' its messages automatically until either the HWM is met (at which it will then block and not queue any more messages) or a PULL socket comes along and starts recv'ing messages.
The case I think you're really concerned about is a slow PULL reader, where you would like to get all of the currently queued messages from the PUSH socket (at once?) and then quit. This isn't how zmq works; you get one message at a time.
To implement something of this sort, you'll have to wrap the PULL capability with your own queue. You 'continually' PULL the messages into your personal queue (in a different thread?) until you want to stop, then process those messages and quit.
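A sketch of that wrapper, with an illustrative endpoint and a hypothetical handle() function; the socket lives entirely inside the draining thread, since ZMQ sockets are not thread-safe:

import queue
import threading
import zmq

local_queue = queue.Queue()
stop = threading.Event()

def drain():
    # Own the PULL socket in this thread and feed our private queue.
    context = zmq.Context.instance()
    pull = context.socket(zmq.PULL)
    pull.connect("tcp://127.0.0.1:5557")   # illustrative endpoint
    poller = zmq.Poller()
    poller.register(pull, zmq.POLLIN)
    while not stop.is_set():
        if dict(poller.poll(100)).get(pull):
            local_queue.put(pull.recv())
    pull.close()                            # stop accepting further messages

threading.Thread(target=drain, daemon=True).start()

# ... later, to "half-close": stop pulling, then work off what we kept.
stop.set()
while not local_queue.empty():
    handle(local_queue.get())               # hypothetical message handler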
I started using ZeroMQ this week, and when using the Request-Response pattern I am not sure how to have a worker safely "hang up" and close his socket without possibly dropping a message and causing the customer who sent that message to never get a response. Imagine a worker written in Python who looks something like this:
import zmq

c = zmq.Context()
s = c.socket(zmq.REP)
s.connect('tcp://127.0.0.1:9999')
for i in range(8):
    s.recv()
    s.send(b'reply')
s.close()
I have been doing experiments and have found that a customer at 127.0.0.1:9999 of socket type zmq.REQ who makes a fair-queued request just might have the misfortune of having the fair-queuing algorithm choose the above worker right after the worker has done its last send() but before it runs the following close() method. In that case, it seems that the request is received and buffered by the ØMQ stack in the worker process, and that the request is then lost when close() throws out everything associated with the socket.
How can a worker detach "safely" — is there any way to signal "I don't want messages anymore", then (a) loop over any final messages that have arrived during transmission of the signal, (b) generate their replies, and then (c) execute close() with the guarantee that no messages are being thrown away?
Edit: I suppose the raw state that I would want to enter is a "half-closed" state, where no further requests could be received — and the sender would know that — but where the return path is still open so that I can check my incoming buffer for one last arrived message and respond to it if there is one sitting in the buffer.
Edit: In response to a good question, corrected the description to make the number of waiting messages plural, as there could be many connections waiting on replies.
You seem to think that you are trying to avoid a “simple” race condition such as in
msg = zmq_recv(socket);        /* simplified pseudocode, not the exact C API */
answer = do_something(msg);
zmq_send(socket, answer);
/* Let's hope a new request does not arrive just now, please close it quickly! */
zmq_close(socket);
but I think the problem is that fair queuing (round-robin) makes things even more difficult: you might already have several queued requests on your worker. The sender will not wait for your worker to be free before sending a new request if it is the worker's turn to receive one, so by the time you call zmq_send, other requests might already be waiting.
In fact, it looks like you might have selected the wrong data direction. Instead of having a requests pool send requests to your workers (even when you would prefer not to receive new ones), you might want to have your workers fetch a new request from a requests queue, take care of it, then send the answer.
Of course, it means using XREP/XREQ (known these days as ROUTER/DEALER), but I think it is worth it.
Edit: I wrote some code implementing the other direction to explain what I mean.
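That code isn't reproduced here, but a rough sketch of the idea (not the author's code; ports are illustrative, and jobs and process() are hypothetical) has each worker ask a ROUTER-based queue for a job with a REQ socket. These are two separate processes:

# Queue side: hand one job to whichever worker asks for it.
frontend = context.socket(zmq.ROUTER)          # XREP in the old terminology
frontend.bind("tcp://127.0.0.1:5560")
while True:
    ident, empty, request = frontend.recv_multipart()  # a worker's "ready"
    frontend.send_multipart([ident, b"", jobs.get()])  # jobs: your job source

# Worker side: fetch work only when free, so nothing queues up locally.
worker = context.socket(zmq.REQ)
worker.connect("tcp://127.0.0.1:5560")
while True:
    worker.send(b"ready")
    job = worker.recv()
    process(job)        # hypothetical; fold the result into the next "ready",
                        # or send it on a separate socket, per your protocol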
I think the problem is that your messaging architecture is wrong. Your workers should use a REQ socket to send a request for work and that way there is only ever one job queued at the worker. Then to acknowledge completion of the work, you could either use another REQ request that doubles as ack for the previous job and request for a new one, or you could have a second control socket.
Some people do this using PUB/SUB for the control so that each worker publishes acks and the master subscribes to them.
You have to remember that with ZeroMQ there are 0 message queues. None at all! Just messages buffered in either the sender or receiver depending on settings like High Water Mark, and type of socket. If you really do need message queues then you need to write a broker app to handle that, or simply switch to AMQP where all communication is through a 3rd party broker.
I've been thinking about this as well. You may want to implement a CLOSE message which notifies the customer that the worker is going away. You could then have the worker drain for a period of time before shutting down. Not ideal, of course, but might be workable.
There is a conflict of interest between sending requests as rapidly as possible to workers, and getting reliability in case a worker crashes or dies. There is an entire section of the ZeroMQ Guide that explains different answers to this question of reliability. Read that, it'll help a lot.
tl;dr workers can/will crash and clients need a resend functionality. The Guide provides reusable code for that, in many languages.
Wouldn't the simplest solution be to have the customer timeout when waiting for the reply and then retry if no reply is received?
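A sketch of that client-side pattern (essentially the Guide's "Lazy Pirate"; the one-second timeout is arbitrary):

import zmq

context = zmq.Context()
client = context.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:9999")

client.send(b"request")
if client.poll(1000) & zmq.POLLIN:   # wait up to one second for a reply
    reply = client.recv()
else:
    client.close(linger=0)           # abandon the stuck REQ/REP state
    # recreate the socket and resend: a REQ socket cannot send twice in a row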
Try sleeping before the call to close. This is fixed in 2.1 but not in 2.0 yet.