Python multithreaded ZeroMQ REQ-REP

I am looking to implement a REQ-REP pattern with Python and ZeroMQ using multithreading.
With Python, I can create a new thread when a new client connects to the server. This thread will handle all communications with that particular client, until the socket is closed:
# Thread that will handle client's requests
class ClientThread(threading.Thread):
    # Implementation...
    def __init__(self, socket):
        threading.Thread.__init__(self)
        self.socket = socket

    def run(self):
        while keep_alive:
            # Thread can receive from client
            data = self.socket.recv(1024)
            # Processing...
            # And send back a reply
            self.socket.send(reply)

while True:
    # The server accepts an incoming connection
    conn, addr = sock.accept()
    # And creates a new thread to handle the client's requests
    newthread = ClientThread(conn)
    # Starting the thread
    newthread.start()
Is it possible to do the same[*] using ZeroMQ? I have seen some examples of multithreading with ZeroMQ and Python, but in all of them a pool of threads is created with a fixed number of threads at the beginning and it seems to be more oriented to load balancing.
[*] Notice what I want is to keep the connection between a client and its thread alive, as the thread is expecting multiple REQ messages from the client and it will store information that must be kept between messages (i.e.: a variable counter that increments its value on a new REQ message; so each thread has its own variable and no other client should ever be able to access that thread). New client = new thread.

Yes, ZeroMQ is a powerful can-do toolbox
However, the major surprise will be that ZeroMQ sockets are far more structured than the plain sockets you use in the sample.
{ aZmqContext -> aZmqSocket -> aBehavioralPrimitive }
ZeroMQ builds a remarkable, abstraction-rich framework under the hood of a "singleton" ZMQ-Context, which is (and shall remain) the only thing used as "shared".
Threads shall not "share" any other "derived" objects, let alone their state, as there is a strong distributed-responsibility framework architecture implemented, both for the sake of clean design and for high performance & low latency.
For all ZMQ sockets one should rather imagine a much smarter, layered sub-structure, where one is relieved of worries about I/O activities ( managed inside the ZMQ-Context's responsibility -- so keep-alive issues, timing issues and fair-queue buffering / select-polling issues simply cease to be visible to you ... ), together with one sort of formal communication-pattern behaviour ( given by the chosen ZMQ socket-type archetype ).
Finally
ZeroMQ, and similarly the nanomsg library, are rather LEGO-like projects that empower you, as an architect & designer, more than one typically realises at the very beginning.
One can thus focus on distributed-system behaviour, as opposed to losing time and energy on solving just another socket-messaging nightmare.
( Definitely worth a look into both books from Pieter Hintjens, co-father of ZeroMQ. There you will find plenty of Aha!-moments on this great subject. )
... and as a cherry on the cake -- you get all of this in a transport-agnostic, universal environment, whether passing some messages over inproc://, others over ipc:// and, in parallel, listening / speaking over tcp:// layers.
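For illustration only ( the endpoint names below are made up ), a minimal sketch of that transport-agnosticism: one and the same socket may bind to several transport classes at once, and peers arriving over any of them are served through the very same API ( ipc:// requires a POSIX platform ):
import zmq

ctx    = zmq.Context.instance()
router = ctx.socket( zmq.ROUTER )

router.bind( "inproc://local-workers" )   # in-process peers ( threads )
router.bind( "ipc:///tmp/demo.pipe" )     # other local processes ( POSIX only )
router.bind( "tcp://*:5555" )             # network peers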
EDIT#1 2014-08-19 17:00 [UTC+0000]
Kindly check the comments below and further review your design options -- both elementary and advanced -- for a <trivial-failure-prone> spin-off processing, for <load-balanced> REP-worker queueing, for <scale-able> distributed processing and for a <fault-resilient_mode> REP-worker binary-start shaded processing.
No heap of mock-up SLOCs, no single code sample will do as a One-Size-Fits-All.
This is exponentially valid when designing distributed messaging systems.
"""REQ/REP modified with QUEUE/ROUTER/DEALER add-on ---------------------------
Multithreaded Hello World server
Author: Guillaume Aubert (gaubert) <guillaume(dot)aubert(at)gmail(dot)com>
"""
import time
import threading
import zmq
print "ZeroMQ version sanity-check: ", zmq.__version__
def aWorker_asRoutine( aWorker_URL, aContext = None ):
"""Worker routine"""
#Context to get inherited or create a new one trick------------------------------
aContext = aContext or zmq.Context.instance()
# Socket to talk to dispatcher --------------------------------------------------
socket = aContext.socket( zmq.REP )
socket.connect( aWorker_URL )
while True:
string = socket.recv()
print( "Received request: [ %s ]" % ( string ) )
# do some 'work' -----------------------------------------------------------
time.sleep(1)
#send reply back to client, who asked --------------------------------------
socket.send( b"World" )
def main():
"""Server routine"""
url_worker = "inproc://workers"
url_client = "tcp://*:5555"
# Prepare our context and sockets ------------------------------------------------
aLocalhostCentralContext = zmq.Context.instance()
# Socket to talk to clients ------------------------------------------------------
clients = aLocalhostCentralContext.socket( zmq.ROUTER )
clients.bind( url_client )
# Socket to talk to workers ------------------------------------------------------
workers = aLocalhostCentralContext.socket( zmq.DEALER )
workers.bind( url_worker )
# --------------------------------------------------------------------||||||||||||--
# Launch pool of worker threads --------------< or spin-off by one in OnDemandMODE >
for i in range(5):
thread = threading.Thread( target = aWorker_asRoutine, args = ( url_worker, ) )
thread.start()
zmq.device( zmq.QUEUE, clients, workers )
# ----------------------|||||||||||||||------------------------< a fair practice >--
# We never get here but clean up anyhow
clients.close()
workers.close()
aLocalhostCentralContext.term()
if __name__ == "__main__":
main()
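As a side note on the [*] requirement of per-client state: that state does not have to live in a per-client thread at all. A ROUTER socket prefixes every request with the client's identity frame, so a single dispatcher can keep, for example, a per-client counter in a plain dict. The following is only a hedged sketch of that idea ( the port number, names and the counter are illustrative, and the clients are assumed to be ordinary REQ sockets ):
import zmq

ctx    = zmq.Context.instance()
router = ctx.socket( zmq.ROUTER )
router.bind( "tcp://*:5556" )

counters = {}                                            # { client-identity: request count }

while True:
    identity, empty, request = router.recv_multipart()   # REQ peers arrive as [ id ][ '' ][ payload ]
    counters[identity] = counters.get( identity, 0 ) + 1
    reply = ( "request #%d from you" % counters[identity] ).encode()
    router.send_multipart( [ identity, b"", reply ] )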

Related

How can I impose server priority on a UDP client receiving from multiple servers on the same port

I have a client application that is set to receive data from a given UDP port, and two servers (let's call them "primary" and "secondary") that are broadcasting data over that port.
I've set up a UDP receiver thread that uses a lossy queue to update my frontend. Lossy is okay here because the data are just status info strings, e.g. 'on'/'off', that I'm picking up periodically.
My desired behavior is as follows:
If the primary server is active and broadcasting, the client will accept data from the primary server only (regardless of data coming in from the secondary server)
If the primary server stops broadcasting, the client will accept data from the secondary server
If the primary server resumes broadcasting, don't cede back to the primary unless the secondary server goes down (to prevent bouncing back and forth in the event that the primary server is going in and out of failure)
If neither server is broadcasting, raise a flag
Currently the problem is that if both servers are broadcasting (which they will be most of the time), my client happily receives data from both and bounces back and forth between the two. I understand why this is happening, but I'm unsure how to stop it / work around it.
How can I structure my client to disregard data coming in from the secondary server as long as it's also getting data from the primary server?
NB - I'm using threads and queues here to keep my UDP operations from blocking my GUI
# EXAMPLE CLIENT APP
import queue
import socket as skt
import tkinter as tk
from tkinter import ttk
from threading import Event, Thread

class App(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title('UDP Client Test')
        # set up window close handler
        self.protocol('WM_DELETE_WINDOW', self.on_close)
        # display the received value
        self.data_label_var = tk.StringVar(self, 'No Data')
        self.data_label = ttk.Label(self, textvariable=self.data_label_var)
        self.data_label.pack()
        # server IP addresses (example)
        self.primary = '10.0.42.1'
        self.secondary = '10.0.42.2'
        self.port = 5555
        self.timeout = 2.0
        self.client_socket = self.get_client_socket(self.port, self.timeout)
        self.dq_loop = None  # placeholder for dequeue loop 'after' ID
        self.receiver_queue = queue.Queue(maxsize=1)
        self.stop_event = Event()
        self.receiver_thread = Thread(
            name='status_receiver',
            target=self.receiver_worker,
            args=(
                self.client_socket,
                (self.primary, self.secondary),
                self.receiver_queue,
                self.stop_event
            )
        )
        # start the receiver thread and the periodic dequeue loop
        self.receiver_thread.start()
        self.receiver_dequeue()

    def get_client_socket(self, port: int, timeout: float) -> skt.socket:
        """Set up a UDP socket bound to the given port"""
        client_socket = skt.socket(skt.AF_INET, skt.SOCK_DGRAM)
        client_socket.settimeout(timeout)
        client_socket.bind(('', port))  # accept traffic on this port from any IP address
        return client_socket

    @staticmethod
    def receiver_worker(
        socket: skt.socket,
        addresses: tuple[str, str],
        msg_queue: queue.Queue,
        stop_event: Event,
    ) -> None:
        """Thread worker that receives data over UDP and puts it in a lossy queue"""
        primary, secondary = addresses  # server IPs
        while not stop_event.is_set():  # loop until application exit...
            try:
                data, server = socket.recvfrom(1024)
                # here's where I'm having trouble - if traffic is coming in from both servers,
                # there's a good chance my frontend will just pick up data from both alternately
                # (and yes, I know these conditions do the same thing...for now)
                if server[0] == primary:
                    msg_queue.put_nowait((data, server))
                elif server[0] == secondary:
                    msg_queue.put_nowait((data, server))
                else:  # inbound traffic on the correct port, but from some other server
                    print('disregard...')
            except queue.Full:
                print('Queue full...')  # not a problem, just here in case...
            except skt.timeout:
                print('Timeout...')  # TODO

    def receiver_dequeue(self) -> None:
        """Periodically fetch data from the worker queue and update the UI"""
        try:
            data, server = self.receiver_queue.get_nowait()
        except queue.Empty:
            pass  # nothing to do
        else:  # update the label
            self.data_label_var.set(data.decode())
        finally:  # continue updating 10x / second
            self.dq_loop = self.after(100, self.receiver_dequeue)

    def on_close(self) -> None:
        """Perform cleanup tasks on application exit"""
        if self.dq_loop:
            self.after_cancel(self.dq_loop)
        self.stop_event.set()  # stop the receiver thread loop
        self.receiver_thread.join()
        self.client_socket.close()
        self.quit()

if __name__ == '__main__':
    app = App()
    app.mainloop()
My actual application is only slightly more complex than this, but the basic operation is the same: get data from UDP, use data to update UI...rinse and repeat
I suspect the changes need to be made to my receiver_worker method, but I'm not sure where to go from here. Any help is very much welcome and appreciated! And thanks for taking the time to read this long question!
Addendum: FWIW I did some reading about Selectors but I'm not sure how to go about implementing them in my case - if anybody can point me to a relevant example, that would be amazing
The core of the problem is: how do you determine that a given server is really offline as opposed to just temporarily taking a break, e.g. due to a momentary network glitch?
All your client really knows is whether it has received any UDP packets from a given source IP address recently or not, for some well-chosen definition of "recently". So what you can do in your client is update a per-IP-address member-variable to the current timestamp, whenever you receive a UDP packet from a given server. Then you can have a helper method like this (pseudocode):
def HowManyMillisecondsSinceTheLastUDPPacketWasReceivedFromServer(self, packetSourceIP):
    return current_timestamp_milliseconds() - self._lastPacketReceiveTimeStamp[packetSourceIP]
Then e.g. if you know that your servers will be sending out a UDP packet once per second, you can decree that a given server is officially considered "offline" if you haven't received any UDP packets from it within the last 5 seconds. (Choose your own numbers here to suit, of course)
Then after you receive a packet and update the corresponding server-timestamp-member-variable, you can also update a member-variable indicating which server is the now the "active server" (i.e. the server you should currently be listening to):
def UpdateActiveServer(self):
    millisSincePrimary   = self.HowManyMillisecondsSinceTheLastUDPPacketWasReceivedFromServer(self._primaryServerIP)
    millisSinceSecondary = self.HowManyMillisecondsSinceTheLastUDPPacketWasReceivedFromServer(self._secondaryServerIP)
    serverOfflineMillis = 5 * 1000  # 5 seconds
    primaryIsOffline   = (millisSincePrimary   >= serverOfflineMillis)
    secondaryIsOffline = (millisSinceSecondary >= serverOfflineMillis)
    if primaryIsOffline and not secondaryIsOffline:
        self._usePacketsFromSecondaryServer = True
    if secondaryIsOffline and not primaryIsOffline:
        self._usePacketsFromSecondaryServer = False
... then the rest of your code can check the current value of self._usePacketsFromSecondaryServer to decide which incoming UDP packets to listen to and which ones to ignore (pseudocode):
def PacketReceived(self, whichServer):
    if ((whichServer == self._primaryServerIP and not self._usePacketsFromSecondaryServer)
            or (whichServer == self._secondaryServerIP and self._usePacketsFromSecondaryServer)):
        # code to parse and use the UDP packet goes here
        pass
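For concreteness, here is a minimal sketch of how this timestamp idea could be folded into the receiver_worker from the question. It is only a sketch, not a drop-in replacement: the 5-second threshold, the last_seen dict and the use of time.monotonic() are my choices, and the socket is assumed to have a timeout set as in get_client_socket.
import time
import queue
import socket as skt
from threading import Event

def receiver_worker(sock: skt.socket,
                    addresses: tuple,
                    out_queue: queue.Queue,
                    stop_event: Event,
                    offline_after: float = 5.0) -> None:
    """Receive UDP status data, preferring the primary server while it is alive."""
    primary, secondary = addresses
    last_seen = {primary: 0.0, secondary: 0.0}    # monotonic receive timestamps per server
    use_secondary = False
    while not stop_event.is_set():
        try:
            data, server = sock.recvfrom(1024)
        except skt.timeout:
            continue                              # no traffic at all within the socket timeout
        ip = server[0]
        if ip not in last_seen:
            continue                              # right port, unknown server: disregard
        now = time.monotonic()
        last_seen[ip] = now
        primary_offline = (now - last_seen[primary]) >= offline_after
        secondary_offline = (now - last_seen[secondary]) >= offline_after
        if primary_offline and not secondary_offline:
            use_secondary = True                  # primary went quiet: fail over
        elif secondary_offline and not primary_offline:
            use_secondary = False                 # only cede back when the secondary dies
        wanted = secondary if use_secondary else primary
        if ip == wanted:
            try:
                out_queue.put_nowait((data, server))
            except queue.Full:
                pass                              # lossy queue: dropping stale updates is fine
The "raise a flag when neither server is broadcasting" case can be handled in the same loop by checking whether both offline flags are true after a socket timeout.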

connection to two RabbitMQ servers

I'm using python with pika, and have the following two similar use cases:
Connect to RabbitMQ server A and server B (at different IP addrs with different credentials), listen on exchange A1 on server A; when a message arrives, process it and send to an exchange on server B
Open an HTTP listener and connect to RabbitMQ server B; when a specific HTTP request arrives, process it and send to an exchange on server B
Alas, in both these cases using my usual techniques, by the time I get to sending to server B the connection throws ConnectionClosed or ChannelClosed.
I assume this is the cause: while waiting on the incoming messages, the connection to server B (its "driver") is starved of CPU cycles and never gets a chance to service its connection socket, thus it can't respond to heartbeats from server B, and thus the server shuts down the connection.
But I can't noodle out the fix. My current work around is lame: I catch the ConnectionClosed, reopen a connection to server B, and retry sending my message.
But what is the "right" way to do this? I've considered these, but don't really feel I have all the parts to solve this:
Don't just sit forever in server A's basic_consume (my usual pattern), but rather use a timeout, and when I catch the timeout somehow "service" heartbeats on server B's driver before returning to a "consume with timeout"... but how do I do that? How do I "let server B's connection driver service its heartbeats"?
I know the socket library's select() call can wait for messages on several sockets at once, then service the socket that has packets waiting. So maybe this is what pika's SelectConnection is for? a) I'm not sure, this is just a hunch. b) Even if right, while I can find examples of how to create this connection, I can't find examples of how to use it to solve my multi-connection case.
Set up the two server connections in different processes, and use Python interprocess queues to get the processed message from one process to the next. The concept is "two different RabbitMQ connections in two different processes should be able to independently service their heartbeats". Except... I think this has a fatal flaw: the process with "server B" is, instead, going to be "stuck" waiting on the interprocess queue, and the same "starvation" is going to happen.
I've checked StackOverflow and Googled this for an hour last night: I can't for the life of me find a blog post or sample code for this.
Any input? Thanks a million!
I managed to work it out, basing my solution on the documentation and an answer in the pika-python Google group.
First of all, your assumption is correct — the client process that's connected to server B, responsible for publishing, cannot reply to heartbeats if it's already blocking on something else, like waiting for a message from server A or blocking on an internal communication queue.
The crux of the solution is that the publisher should run as a separate thread and use BlockingConnection.process_data_events to service heartbeats and such. It looks like that method is supposed to be called in a loop that checks if the publisher still needs to run:
def run(self):
    while self.is_running:
        # Block at most 1 second before returning and re-checking
        self.connection.process_data_events(time_limit=1)
Proof of concept
Since proving the full solution requires having two separate RabbitMQ instances running, I have put together a Git repo with an appropriate docker-compose.yml, the application code and comments to test this solution.
https://github.com/karls/rabbitmq-two-connections
Solution outline
Below is a sketch of the solution, minus imports and such. Some notable things:
Publisher runs as a separate thread
The only "work" that the publisher does is servicing heartbeats and such, via Connection.process_data_events
The publisher registers a callback whenever the consumer wants to publish a message, using Connection.add_callback_threadsafe
The consumer takes the publisher as a constructor argument so it can publish the messages it receives, but it can work via any other mechanism as long as you have a reference to an instance of Publisher
The code is taken from the linked Git repo, which is why certain details are hardcoded, e.g. the queue name. It will work with any RabbitMQ setup (direct-to-queue, topic exchange, fanout, etc.).
class Publisher(threading.Thread):
    def __init__(
        self,
        connection_params: ConnectionParameters,
        *args,
        **kwargs,
    ):
        super().__init__(*args, **kwargs)
        self.daemon = True
        self.is_running = True
        self.name = "Publisher"
        self.queue = "downstream_queue"
        self.connection = BlockingConnection(connection_params)
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue=self.queue, auto_delete=True)
        self.channel.confirm_delivery()

    def run(self):
        while self.is_running:
            self.connection.process_data_events(time_limit=1)

    def _publish(self, message):
        logger.info("Calling '_publish'")
        self.channel.basic_publish("", self.queue, body=message.encode())

    def publish(self, message):
        logger.info("Calling 'publish'")
        self.connection.add_callback_threadsafe(lambda: self._publish(message))

    def stop(self):
        logger.info("Stopping...")
        self.is_running = False
        # Call .process_data_events one more time to block
        # and allow the while-loop in .run() to break.
        # Otherwise the connection might be closed too early.
        #
        self.connection.process_data_events(time_limit=1)
        if self.connection.is_open:
            self.connection.close()
            logger.info("Connection closed")
        logger.info("Stopped")

class Consumer:
    def __init__(
        self,
        connection_params: ConnectionParameters,
        publisher: Optional["Publisher"] = None,
    ):
        self.publisher = publisher
        self.queue = "upstream_queue"
        self.connection = BlockingConnection(connection_params)
        self.channel = self.connection.channel()
        self.channel.queue_declare(queue=self.queue, auto_delete=True)
        self.channel.basic_qos(prefetch_count=1)

    def start(self):
        self.channel.basic_consume(
            queue=self.queue, on_message_callback=self.on_message
        )
        try:
            self.channel.start_consuming()
        except KeyboardInterrupt:
            logger.info("Warm shutdown requested...")
        except Exception:
            traceback.print_exception(*sys.exc_info())
        finally:
            self.stop()

    def on_message(self, _channel: Channel, m, _properties, body):
        try:
            message = body.decode()
            logger.info(f"Got: {message!r}")
            if self.publisher:
                self.publisher.publish(message)
            else:
                logger.info(f"No publisher provided, printing message: {message!r}")
            self.channel.basic_ack(delivery_tag=m.delivery_tag)
        except Exception:
            traceback.print_exception(*sys.exc_info())
            self.channel.basic_nack(delivery_tag=m.delivery_tag, requeue=False)

    def stop(self):
        logger.info("Stopping consuming...")
        if self.connection.is_open:
            logger.info("Closing connection...")
            self.connection.close()
        if self.publisher:
            self.publisher.stop()
        logger.info("Stopped")

Twisted: Using connectProtocol to connect endpoint cause memory leak?

I was trying to build a server. Besides accepting connections from clients as normal servers do, my server also connects to another server as a client.
I've set the protocol and endpoint like below:
p = FooProtocol()
client = TCP4ClientEndpoint(reactor, '127.0.0.1' , 8080) # without ClientFactory
Then, after calling reactor.run(), the server will listen for / accept new socket connections. When new socket connections are made (in connectionMade), the server will call connectProtocol(client, p), which acts like the pseudocode below:
while server accepts new socket:
    connectProtocol(client, p)
    # client.connect(foo_client_factory) --> connecting in this way won't
    # cause a memory leak
As the connections to the client are made, the memory is gradually consumed(explicitly calling gc doesn't work).
Do I use the Twisted in a wrong way?
-----UPDATE-----
My test program: the server waits for clients to connect. When a connection from a client is made, the server will create 50 connections to the other server.
Here is the code:
#! /usr/bin/env python
import sys
import gc
from twisted.internet import protocol, reactor, defer, endpoints
from twisted.internet.endpoints import TCP4ClientEndpoint, connectProtocol

class MyClientProtocol(protocol.Protocol):
    def connectionMade(self):
        self.transport.loseConnection()

class MyClientFactory(protocol.ClientFactory):
    def buildProtocol(self, addr):
        p = MyClientProtocol()
        return p

class ServerFactory(protocol.Factory):
    def buildProtocol(self, addr):
        p = ServerProtocol()
        return p

client_factory = MyClientFactory()  # global
client_endpoint = TCP4ClientEndpoint(reactor, '127.0.0.1', 8080)  # global
times = 0

class ServerProtocol(protocol.Protocol):
    def connectionMade(self):
        global client_factory
        global client_endpoint
        global times
        for i in range(50):
            # 1)
            p = MyClientProtocol()
            connectProtocol(client_endpoint, p)  # cause memleak
            # 2)
            #client_endpoint.connect(client_factory)  # no memleak
        times += 1
        if times % 10 == 9:
            print 'gc'
            gc.collect()  # doesn't work
        self.transport.loseConnection()

if __name__ == '__main__':
    server_factory = ServerFactory()
    serverEndpoint = endpoints.serverFromString(reactor, "tcp:8888")
    serverEndpoint.listen(server_factory)
    reactor.run()
This program doesn't do any Twisted log initialization. This means it runs with the "log beginner" for its entire run. The log beginner records all log events it observes in a LimitedHistoryLogObserver (up to a configurable maximum).
The log beginner keeps 2 ** 16 (_DEFAULT_BUFFER_MAXIMUM) events and then begins throwing out old ones, presumably to avoid consuming all available memory if a program never configures another observer.
If you hack the Twisted source to set _DEFAULT_BUFFER_MAXIMUM to a smaller value - eg, 10 - then the program no longer "leaks". Of course, it's really just an object leak and not a memory leak and it's bounded by the 2 ** 16 limit Twisted imposes.
However, connectProtocol creates a new factory each time it is called. When each new factory is created, it logs a message. And the application code generates a new Logger for each log message. And the logging code puts the new Logger into the log message. This means the memory cost of keeping those log messages around is quite noticeable (compared to just leaking a short blob of text or even a dict with a few simple objects in it).
I'd say the code in Twisted is behaving just as intended... but perhaps someone didn't think through the consequences of that behavior completely.
And, of course, if you configure your own log observer then the "log beginner" is taken out of the picture and there is no problem. It does seem reasonable to expect that all serious programs will enable logging rather quickly and avoid this issue. However, lots of short throw-away or example programs often don't ever initialize logging and rely on print instead, making them subject to this behavior.
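One way to do that (a minimal sketch, assuming logging to stdout is acceptable for the test program) is to install a real observer before reactor.run(), which replaces the log beginner's in-memory buffer:
import sys
from twisted.logger import globalLogBeginner, textFileLogObserver

# installing any real observer stops the "log beginner" from buffering
# every event (and every Logger referenced by it) in memory
globalLogBeginner.beginLoggingTo([textFileLogObserver(sys.stdout)])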
Note: this problem was reported in #8164 and fixed in 4acde626, so Twisted 17 will not have this behavior.

ZeroMQ: HWM on PUSH does not work

I am trying to write a server/client script with a server that vents the tasks, and multiple workers that execute it.
The problem is that my ventilator has so many tasks that it would fill up the memory in a heartbeat.
I tried to set the HWM before it binds, but with no success. It just keeps on sending messages as soon as a worker connects, completely disregarding the HWM that was set. I also have a sink that keeps record of the tasks that were done.
server.py
import zmq

def ventilate():
    context = zmq.Context()
    # Socket to send messages on
    sender = context.socket(zmq.PUSH)
    sender.setsockopt(zmq.SNDHWM, 30)  # Big messages, so I don't want to keep too many in queue
    sender.bind("tcp://*:5557")
    # Socket with direct access to the sink: used to synchronize start of batch
    sink = context.socket(zmq.PUSH)
    sink.connect("tcp://localhost:5558")
    print "Sending tasks to workers…"
    # The first message is "0" and signals start of batch
    sink.send('0')
    print "Sent starting signal"
    while True:
        sender.send("Message")

if __name__ == "__main__":
    ventilate()
worker.py
import zmq
from multiprocessing import Process

def work():
    context = zmq.Context()
    # Socket to receive messages on
    receiver = context.socket(zmq.PULL)
    receiver.connect("tcp://localhost:5557")
    # Socket to send messages to
    sender = context.socket(zmq.PUSH)
    sender.connect("tcp://localhost:5558")
    # Process tasks forever
    while True:
        msg = receiver.recv()
        print "Doing sth with msg %s" % (msg)
        sender.send("Message %s done" % (msg))

if __name__ == "__main__":
    for worker in range(10):
        Process(target=work).start()
sink.py
import zmq

def sink():
    context = zmq.Context()
    # Socket to receive messages on
    receiver = context.socket(zmq.PULL)
    receiver.bind("tcp://*:5558")
    # Wait for start of batch
    s = receiver.recv()
    print "Received start signal"
    while True:
        msg = receiver.recv()
        print msg

if __name__ == "__main__":
    sink()
Ok, I had a play around. I don't think the issue is with the PUSH HWM, but rather that you can't set a HWM for PULL. If you look at this documentation, you can see that it says N/A for the action on HWM.
The PULL sockets seem to be taking hundreds of messages each (and I did try setting a HWM just in case it did anything on the PULL socket. It didn't.). I evidenced this by changing the ventilator to send messages with an incrementing integer, and changing each worker in the pool to wait 2 seconds between calls to recv(). The workers print out that they are processing messages with vastly different integers. For instance, one worker will be working on message 10, while the next is working on message 400. As time goes on, you see the worker who was processing message 10, is now processing message 11, 12, 13, etc. while the other is processing 401, 402, etc.
This indicates to me that the ZMQ_PULL socket is buffering the messages somewhere. So while the ZMQ_PUSH socket does have a HWM, the PULL socket is requesting messages quickly, despite them not actually being accessed by a call to recv(). So that results in the PUSH HWM effectively being ignored if a PULL socket is connected. As far as I can see, you can't control the length of the buffer of the PULL socket (I would expect the RCVHWM socket option to control this but it doesn't appear to).
This behaviour of course begs the question: what is the point of the ZMQ_PULL HWM option, which only makes sense to have if you can also control the receiving socket's HWM?
At this point, I'd start asking the 0MQ people whether you are missing something obvious, or if this is considered a bug.
Sorry I couldn't be more help!
ZeroMQ has buffers on both sending and receiving ends of a socket, hence you need to set high water marks on both the PUSH and the PULL socket in your code (and indeed before a bind() or connect()).
In the Python bindings this is now conveniently done via socket.hwm = 1 which will set both ZMQ_SNDHWM and ZMQ_RCVHWM in one go.
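As a minimal sketch of that advice applied to the code above (the value 30 is simply the question's choice), both ends cap their queues before bind() / connect():
import zmq

ctx = zmq.Context()

# ventilator side: cap the send queue before bind()
sender = ctx.socket(zmq.PUSH)
sender.setsockopt(zmq.SNDHWM, 30)
sender.bind("tcp://*:5557")

# worker side: cap the receive queue before connect()
receiver = ctx.socket(zmq.PULL)
receiver.setsockopt(zmq.RCVHWM, 30)      # or simply: receiver.hwm = 30
receiver.connect("tcp://localhost:5557")
Note that over tcp:// the operating-system socket buffers add some extra in-flight capacity on top of the two HWMs, so the combined limit is approximate rather than exact.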

Testing using inverted zeromq pub-sub in python

I used pyzmq 2.2.0.1 (Python 2.7 on Windows or Linux) in my code, and when I run this it works (also in Python threads):
def test_zmq_inverted_pub_sub():
    import zmq
    import time
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    pub = ctx.socket(zmq.PUB)
    sub.bind('tcp://127.0.0.1:5555')
    sub.setsockopt(zmq.SUBSCRIBE, b'')
    time.sleep(3)
    pub.connect('tcp://127.0.0.1:5555')
    pub.send(b'0')
    assert sub.poll(3)
When I upgraded my zmq to 13.1.0 (and now to 14.0.0), this test stopped working.
I tried searching for relevant changes but didn't find anything.
When I create these sockets in different processes it works, but I don't want to open a new process for my test. Is there any explanation why it doesn't work, and how can I do this test right?
Thanks.
This is mainly because subscriptions are filtered PUB-side, starting with zeromq 3.0. It takes a finite time for subscriptions to propagate, so the fact that you are trying to send immediately after you establish the connection means that you are probably sending before the PUB socket knows that it has any subscribers.
There is a secondary issue that is a known bug, specific to when SUB binds and PUB connects. The result is that the SUB socket does not tell the PUB about its subscriptions until the first time it polls / recvs after the connection has been established.
This version of the test will pass:
def test_zmq_inverted_pub_sub():
    import zmq
    import time
    ctx = zmq.Context()
    sub = ctx.socket(zmq.SUB)
    pub = ctx.socket(zmq.PUB)
    sub.bind('tcp://127.0.0.1:5555')
    sub.setsockopt(zmq.SUBSCRIBE, b'')
    pub.connect('tcp://127.0.0.1:5555')
    # the first sub.poll is a workaround to force subscription propagation
    for i in range(2):
        pub.send(b'hi')
        evt = sub.poll(1)
        if evt:
            break
    assert evt
