I want to introduce zmq-based logging into a Python program. As I was facing ZMQError: Address in use errors, I decided to boil it down to a simple proof of concept. I was able to run the boiled-down version, but not to receive any log entries. This is the code I used:
Log Publisher:
import time
import logging
from zmq.log import handlers as zmqHandler

logger = logging.getLogger('myapp')
logger.setLevel(logging.ERROR)

zmqH = zmqHandler.PUBHandler('tcp://127.0.0.1:12344')
logger.addHandler(zmqH)

for i in range(50):
    logger.error('error test...')
    print "Send error #%s" % (str(i))
    time.sleep(1)
Result
Send error #0
Send error #1
Send error #2
Send error #3
Send error #4
...
Log Subscriber:
import time
import zmq

def sub_client():
    port = "12344"
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect("tcp://127.0.0.1:%s" % port)
    # Generate 30 entries
    for i in range(30):
        print "Listening to publishers..."
        message = socket.recv()
        print "Received error #%s: %s" % (str(i), message)
        time.sleep(1)

sub_client()
Result
Listening to publishers...
So the subscriber is stuck in the call to socket.recv(). I started publisher and subscriber in different consoles. Both processes appear when I use netstat:
C:\>netstat -a -n -o | findstr 12344
TCP 127.0.0.1:12344 0.0.0.0:0 LISTEN 1336
TCP 127.0.0.1:12344 127.0.0.1:51937 ESTABLISHED 1336
TCP 127.0.0.1:51937 127.0.0.1:12344 ESTABLISHED 8624
I fail to see my mistake here, any ideas?
In addition to the problem at hand: how do I use this zmq log handler in general?
Do I have to create one instance of the PUBHandler per process and then add it to all logger instances (logging.getLogger('myapp') creates its own logger instance, right?), or do I have to create a separate PUBHandler for each of the different classes I use? As the PUBHandler class has a createLock(), I assume that it is not thread safe...
For completeness I want to mention the doc of the PUBHandler class.
I am using a python(x,y) distribution on Win7 with Python 2.7.10 and pyzmq 14.7.0-14.
[update]
I ruled out the Windows firewall as the source of the missing packets.
The problem is on the subscriber side. Initially a subscriber filters out all messages until a filter is set. Use the socket.setsockopt(opt, value) function to achieve this.
The pyZMQ description is not very clear about the use of this function:
getsockopt(opt) get default socket options for new sockets created by
this Context
But the documentation of the zmq_setsockopt function is quite clear (see here):
int zmq_setsockopt (void *socket, int option_name, const void *option_value, size_t option_len)
...
ZMQ_SUBSCRIBE: Establish message filter The ZMQ_SUBSCRIBE option shall establish a new message filter on a ZMQ_SUB socket.
Newly created ZMQ_SUB sockets shall filter out all incoming
messages, therefore you should call this option to establish an
initial message filter.
So the solution is to set a filter with socket.setsockopt(zmq.SUBSCRIBE, filter), where filter is the string you want to filter on. Use filter='' to receive all messages. A filter like filter='ERROR' will only deliver the error messages and suppress all other levels such as WARNING, INFO or DEBUG. (Under Python 3 the filter has to be a bytes object, e.g. b''.)
With this the sub_client() function looks like this:
import time
import zmq

def sub_client():
    port = "12344"
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect("tcp://127.0.0.1:%s" % port)
    socket.setsockopt(zmq.SUBSCRIBE, '')
    # Process 30 updates
    for i in range(30):
        print "Listening to publishers..."
        message = socket.recv()
        print "Received error #%s: %s" % (str(i), message)
        time.sleep(1)

sub_client()
I know it is an older post, but if someone lands here, this is what the subscriber looks like:
import time
import zmq

def sub_client():
    port = "12345"
    context = zmq.Context()
    socket = context.socket(zmq.SUB)
    socket.connect("tcp://localhost:%s" % port)
    socket.subscribe("")
    # Process 30 updates
    for i in range(30):
        print("Listening to publishers...")
        message = socket.recv()
        print("Received error #%s: %s" % (str(i), message))
        time.sleep(1)

sub_client()
And the publisher looks like this:
import zmq
import logging
import time
from zmq.log.handlers import PUBHandler

port = "12345"
context = zmq.Context()
pub = context.socket(zmq.PUB)
pub.bind("tcp://*:%s" % port)

handler = PUBHandler(pub)
logger = logging.getLogger()
logger.setLevel(logging.ERROR)
logger.addHandler(handler)

for i in range(50):
    logger.error('error test...')
    print("publish error", str(i))
    time.sleep(1)
I guess you missed creating and binding a PUB socket in the server. Something like this should do:
publisher = context.socket(zmq.PUB)
publisher.bind('tcp://127.0.0.1:12344')
zmqh = PUBHandler(publisher)
logger = logging.getLogger('myapp')
logger.setLevel(logging.ERROR)
logger.addHandler(zmqh)
I hope this helps.
Related
I'm programming a broker on my Raspberry Pi with Python, and I have sub.js and pub.js programmed in JavaScript. The pub and the sub are tested and work correctly, but the broker code doesn't react.
PUSH.js
const { PUSH } = require('./config');
var zmq = require("zeromq"),
    sock = zmq.socket("push");

sock.connect("tcp://" + PUSH);
console.log("PUSH connected to port 24041 from the broker");

setInterval(function() {
    console.log("PUSH sending to broker");
    sock.send("example"); // send to broker
}, 500);
SUB.js
const { sub } = require('./config');
var zmq = require("zeromq"),
    sock = zmq.socket("sub");

sock.connect("tcp://" + sub);
console.log("sub connected to port 24042 from the broker");
sock.subscribe(""); // subscribe to every topic

sock.on("message", function(msg) {
    // print what the broker forwarded
    console.log("Broker has received: %s", msg.toString());
});
(Screenshots: output from a working broker sending "example", and output from the Raspberry Pi for PUSH.js and PUB.js.)
With the broker code I want to receive and print what PUSH.js sends (port 41) and send it back to SUB.js (port 42).
import zmq
import time

def main():
    """main method"""
    # Prepare our context and sockets
    context = zmq.Context()
    publisher = context.socket(zmq.PUSH)
    subscriber = context.socket(zmq.SUB)
    publisher.connect("tcp://address:42")
    subscriber.connect("tcp://address:41")
    subscriber.setsockopt(zmq.SUBSCRIBE, b"example")

    while True:
        # print received contents from port 41
        [address, contents] = subscriber.recv_multipart()
        print("[%s] %s" % (address, contents))
        # send received contents to port 42
        publisher.send_multipart([address, contents])

    # We never get here but clean up anyhow
    publisher.close()
    subscriber.close()
    context.term()

if __name__ == "__main__":
    main()
Broker output: nothing is printed.
Make sure the port numbers do match -- 41 != 24041 / 42 != 24042.
Make sure each link has at least one .bind()-node and all others .connect() to that one address/port, so as to make both the PUB/SUB and PUSH/PULL archetypes indeed work together.
Make sure to follow the ZeroMQ published API & avoid incompatible setups, like trying to pair PUSH with PUSH. A corrected sketch follows below.
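For illustration, here is a minimal broker sketch that follows these three rules, reusing the port numbers and the single-frame message from PUSH.js and SUB.js above (the variable names are my own):

import zmq

context = zmq.Context()
puller = context.socket(zmq.PULL)    # PULL matches the PUSH socket in PUSH.js
puller.bind("tcp://*:24041")         # the broker binds, the JS scripts connect
publisher = context.socket(zmq.PUB)  # PUB matches the SUB socket in SUB.js
publisher.bind("tcp://*:24042")

while True:
    msg = puller.recv()              # PUSH.js sends single-frame messages
    print("[broker] forwarding %s" % msg)
    publisher.send(msg)              # SUB.js, subscribed to "", receives all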
I have created a server that has a socket bound to 2 ports. There are 2 clients, each of which is connected to 1 port. The server receives random numbers from both clients, but I want the server to know from which port it is receiving the data.
I know it may be a pretty stupid question and I am new to this. Please help!
Server
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")  # a single socket is bound to both ports
socket.bind("tcp://*:5556")

# Wait for the next request from a client
while True:
    try:
        message = socket.recv()
        msg = message.decode("utf-8")  # decode the bytes to string
        print("Random Integer Received: %s" % msg)  # display received string
        # Send reply back to client
        socket.send_string("Random Integer Received from client 1")
        #sock_add = socket.getaddrinfo()
    except zmq.ZMQError as e:
        print('Unable to receive: %s', e)
Client 1 (client 2 has identical code with port address 5556)
import zmq
import time
import numpy as np

context = zmq.Context()

# Socket to talk to server
print("Connecting to server…")
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5555")

# Do 100 requests, waiting each time for a response
for request in range(100):
    k = np.random.randint(150)  # generates a random integer
    print("Sending request %s …" % request, "sent no is %s" % k)
    socket.send_string(str(k))
    # Get the reply.
    message = socket.recv()
    print("Received reply %s [ %s ]" % (request, message))
    time.sleep(5)  # delay of 5 seconds
Please help !!
I would simply add the information that you need (the port) into the message itself and use zmq.SNDMORE...
Something like:
socket.send_string("Port 5555", flags=zmq.SNDMORE)
socket.send_string(str(k)) # or 5556
And then, on the server side, you can see which port the message came from based on the message itself.
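On the server side, a matching sketch could look like this (assuming the client sends exactly the two frames shown above; recv_multipart() returns all frames of one message as a list):

while True:
    port_info, payload = socket.recv_multipart()
    print("Received %s from the client on %s" % (payload, port_info))
    socket.send_string("Random Integer Received")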
I started using zeromq with Python via the Publisher/Subscriber reference. However, I can't find any documentation about how to treat messages in the queue. I want to treat the last received message differently from the rest of the elements of the queue.
Example
publisher.py
import zmq
import random
import time

port = "5556"
topic = "1"
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:%s" % port)

while True:
    messagedata = random.randrange(1, 215)
    print "%s %d" % (topic, messagedata)
    socket.send("%s %d" % (topic, messagedata))
    time.sleep(.2)
subscriber.py
import zmq

port = "5556"
topic = "1"
context = zmq.Context()
socket = context.socket(zmq.SUB)
print "Connecting..."
socket.connect("tcp://localhost:%s" % port)
socket.setsockopt(zmq.SUBSCRIBE, topic)

while True:
    if isLastMessage():  # probably based on socket.recv()
        analysis_function()  # time consuming function
    else:
        simple_function()  # something simple like print and save in memory
I just want to know how to create the isLastMessage() function described in the subscriber.py file. If there's something directly in zeromq or a workaround.
Welcome to the world of non-blocking messaging / signalling
this is a cardinal feature for any serious distributed-system design.
If you assume a "last" message by way of not having another one in the pipe, then a Poller() instance may help your main event-loop: you can control the amount of time to "wait"-a-bit before considering the pipe "empty", so as not to devastate your IO-resources with zero-wait spinning-loops.
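A minimal sketch of that idea, reusing socket, simple_function() and analysis_function() from the question (the 100 ms timeout is an arbitrary choice of mine):

import zmq

poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)

last_msg = None
while True:
    events = dict(poller.poll(timeout=100))  # "wait"-a-bit: up to 100 ms
    if socket in events:
        last_msg = socket.recv()
        simple_function()                    # more messages may still follow
    elif last_msg is not None:
        analysis_function()                  # pipe looks "empty": treat as last
        last_msg = None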
Explicit signalling is always better ( if you can design the remote end behaviour )
There is Zero-knowledge on the receiver-side about what the context of the "last"-message received is ( and explicit signalling is advised to be rather broadcast from the message sender-side ). However, there is a reversed feature to this -- one that instructs ZeroMQ archetypes to "internally" throw away all messages that are not the "last"-message, thus reducing the receiver-side processing to just the "last"-message available:
aQuoteStreamMESSAGE.setsockopt( zmq.CONFLATE, 1 )
If you would like to read more on ZeroMQ patterns and anti-patterns, do not miss Pieter HINTJENS' fabulous book "Code Connected, Volume 1" ( also available in pdf ), and you may like a broader view on distributed-computing using a principally non-blocking ZeroMQ approach.
If isLastMessage() is meant to identify the last message within the stream of messages produced by publisher.py, then this is impossible, since there is no last message: publisher.py produces an infinite stream of messages!
However, if publisher.py knew its last "real" message, i.e. no while True:, it could send an "I am done" message afterwards. Identifying that in subscriber.py is trivial.
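A minimal sketch of that idea ("DONE" is an arbitrary sentinel of my choosing, appended after the topic so the subscription filter still matches):

# publisher.py, after the last real message:
socket.send("%s DONE" % topic)

# subscriber.py:
while True:
    message = socket.recv()
    if message.endswith("DONE"):  # the "I am done" sentinel
        analysis_function()
        break
    simple_function()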
Sorry, I will keep the question for reference. I just found the answer: in the documentation there is a NOBLOCK flag that you can pass to the receiver. With this, the recv command doesn't block. A simple workaround, extracted from part of another answer, is the following:
while True:
    try:
        # check for a message, this will not block
        message = socket.recv(flags=zmq.NOBLOCK)
        # a message has been received
        print "Message received:", message
    except zmq.Again as e:
        print "No message received yet"
As for the real implementation, you cannot be sure that a message is the last one until a recv with the NOBLOCK flag fails and you have entered the exception block. Which translates to something like the following:
msg = subscribe(in_socket)  # subscribe() here is a helper wrapping socket.recv()
is_last = False
while True:
    if is_last:
        msg = subscribe(in_socket)
        is_last = False
    else:
        try:
            old_msg = msg
            msg = subscribe(in_socket, flags=zmq.NOBLOCK)
            # if a new message was received, then process the old message
            process_not_last(old_msg)
        except zmq.Again as e:
            process_last(msg)
            is_last = True  # it is probably the last message
I have an application where every websocket connection (within a tornado open callback) creates a zmq.SUB socket to an existing zmq.FORWARDER device. The idea is to receive data from zmq as callbacks, which can then be relayed to frontend clients over the websocket connection.
https://gist.github.com/abhinavsingh/6378134
ws.py
import zmq
from zmq.eventloop import ioloop
from zmq.eventloop.zmqstream import ZMQStream
ioloop.install()

from tornado.websocket import WebSocketHandler
from tornado.web import Application
from tornado.ioloop import IOLoop

ioloop = IOLoop.instance()

class ZMQPubSub(object):

    def __init__(self, callback):
        self.callback = callback

    def connect(self):
        self.context = zmq.Context()
        self.socket = self.context.socket(zmq.SUB)
        self.socket.connect('tcp://127.0.0.1:5560')
        self.stream = ZMQStream(self.socket)
        self.stream.on_recv(self.callback)

    def subscribe(self, channel_id):
        self.socket.setsockopt(zmq.SUBSCRIBE, channel_id)

class MyWebSocket(WebSocketHandler):

    def open(self):
        self.pubsub = ZMQPubSub(self.on_data)
        self.pubsub.connect()
        self.pubsub.subscribe("session_id")
        print 'ws opened'

    def on_message(self, message):
        print message

    def on_close(self):
        print 'ws closed'

    def on_data(self, data):
        print data

def main():
    application = Application([(r'/channel', MyWebSocket)])
    application.listen(10001)
    print 'starting ws on port 10001'
    ioloop.start()

if __name__ == '__main__':
    main()
forwarder.py
import logging
import zmq

logger = logging.getLogger(__name__)

def main():
    try:
        context = zmq.Context(1)
        frontend = context.socket(zmq.SUB)
        frontend.bind('tcp://*:5559')
        frontend.setsockopt(zmq.SUBSCRIBE, '')
        backend = context.socket(zmq.PUB)
        backend.bind('tcp://*:5560')
        print 'starting zmq forwarder'
        zmq.device(zmq.FORWARDER, frontend, backend)
    except KeyboardInterrupt:
        pass
    except Exception as e:
        logger.exception(e)
    finally:
        frontend.close()
        backend.close()
        context.term()

if __name__ == '__main__':
    main()
publish.py
import zmq

if __name__ == '__main__':
    context = zmq.Context()
    socket = context.socket(zmq.PUB)
    socket.connect('tcp://127.0.0.1:5559')
    socket.send('session_id helloworld')
    print 'sent data for channel session_id'
However, my ZMQPubSub class doesn't seem to be receiving any data at all.
I further experimented and realized that I need to call ioloop.IOLoop.instance().start() after registering the on_recv callback within ZMQPubSub. But that will just block the execution.
I also tried passing the main ioloop instance to the ZMQStream constructor, but that doesn't help either.
Is there a way by which I can bind ZMQStream to the existing main ioloop instance without blocking flow within MyWebSocket.open?
In your now complete example, simply change frontend in your forwarder to a PULL socket and your publisher socket to PUSH, and it should behave as you expect.
The general principles of socket choice that are relevant here:
use PUB/SUB when you want to send a message to everyone who is ready to receive it (may be no one)
use PUSH/PULL when you want to send a message to exactly one peer, waiting for them to be ready
It may appear initially that you just want PUB-SUB, but once you start looking at each socket pair, you realize that they are very different. The frontend-websocket connection is definitely PUB-SUB - you may have zero-to-many receivers, and you just want to send messages to everyone who happens to be available when a message comes through. But the backend side is different - there is only one receiver, and it definitely wants every message from the publishers.
So there you have it - backend should be PULL and frontend PUB. All your sockets:
PUSH -> [PULL-PUB] -> SUB
publish.py: socket is PUSH, connected to backend in forwarder.py
forwarder.py: backend is PULL, frontend is PUB
ws.py: SUB connects and subscribes to forwarder.frontend.
The relevant behavior that makes PUB/SUB fail on the backend in your case is the slow-joiner syndrome, which is described in The Guide. Essentially, subscribers take a finite time to tell publishers about their subscriptions, so if you send a message immediately after opening a PUB socket, the odds are it hasn't been told that it has any subscribers yet, so it's just discarding messages.
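In terms of the listings above, the change amounts to just two socket types (a sketch, not the full files):

# forwarder.py: the receiving side becomes PULL -- no SUBSCRIBE needed
frontend = context.socket(zmq.PULL)
frontend.bind('tcp://*:5559')

# publish.py: the publisher becomes PUSH and waits until a peer is ready
socket = context.socket(zmq.PUSH)
socket.connect('tcp://127.0.0.1:5559')
socket.send('session_id helloworld')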
ZeroMQ subscribers have to subscribe to the messages they wish to receive; I don't see that in your code. I believe the Python way is this:
self.socket.setsockopt(zmq.SUBSCRIBE, "")
What is a good way to communicate between two separate Python runtimes? Things I've tried:
reading/writing on named pipes e.g. os.mkfifo (feels hacky)
dbus services (worked on desktop, but too heavyweight for headless)
sockets (seems too low-level; surely there's a higher level module to use?)
My basic requirement is to be able to run python listen.py like a daemon, able to receive messages from python client.py. The client should just send a message to the existing process and terminate, with return code 0 for success and nonzero for failure (i.e., two-way communication is required).
The multiprocessing library provides listeners and clients that wrap sockets and allow you to pass arbitrary python objects.
Your server could listen to receive python objects:
from multiprocessing.connection import Listener

address = ('localhost', 6000)  # family is deduced to be 'AF_INET'
listener = Listener(address, authkey=b'secret password')
conn = listener.accept()
print 'connection accepted from', listener.last_accepted
while True:
    msg = conn.recv()
    # do something with msg
    if msg == 'close':
        conn.close()
        break
listener.close()
Your client could send commands as objects:
from multiprocessing.connection import Client

address = ('localhost', 6000)
conn = Client(address, authkey=b'secret password')
conn.send('close')
# can also send arbitrary objects:
# conn.send(['a', 2.5, None, int, sum])
conn.close()
Nah, zeromq is the way to go. Delicious, isn't it?
import argparse
import zmq

parser = argparse.ArgumentParser(description='zeromq server/client')
parser.add_argument('--bar')
args = parser.parse_args()

if args.bar:
    # client
    context = zmq.Context()
    socket = context.socket(zmq.REQ)
    socket.connect('tcp://127.0.0.1:5555')
    socket.send(args.bar)
    msg = socket.recv()
    print msg
else:
    # server
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind('tcp://127.0.0.1:5555')
    while True:
        msg = socket.recv()
        if msg == 'zeromq':
            socket.send('ah ha!')
        else:
            socket.send('...nah')
Based on #vsekhar's answer, here is a Python 3 version with more details and multiple connections:
Server
from multiprocessing.connection import Listener

listener = Listener(('localhost', 6000), authkey=b'secret password')
running = True
while running:
    conn = listener.accept()
    print('connection accepted from', listener.last_accepted)
    while True:
        msg = conn.recv()
        print(msg)
        if msg == 'close connection':
            conn.close()
            break
        if msg == 'close server':
            conn.close()
            running = False
            break
listener.close()
Client
from multiprocessing.connection import Client
import time

# Client 1
conn = Client(('localhost', 6000), authkey=b'secret password')
conn.send('foo')
time.sleep(1)
conn.send('close connection')
conn.close()

time.sleep(1)

# Client 2
conn = Client(('localhost', 6000), authkey=b'secret password')
conn.send('bar')
conn.send('close server')
conn.close()
From my experience, rpyc is by far the simplest and most elegant way to go about it.
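For a taste of it, a minimal sketch (the service name, method name, and port are my own choices; rpyc makes methods prefixed with exposed_ callable by the client):

# listen.py -- the daemon side
import rpyc
from rpyc.utils.server import ThreadedServer

class MessageService(rpyc.Service):
    def exposed_handle(self, msg):  # reachable as conn.root.handle(...)
        print(msg)
        return True                 # success flag for the client

ThreadedServer(MessageService, port=18861).start()

# client.py -- sends one message, exits 0 on success
import sys
import rpyc

conn = rpyc.connect('localhost', 18861)
sys.exit(0 if conn.root.handle('hello') else 1)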
I would use sockets; local communication is heavily optimized, so you shouldn't have performance problems, and it gives you the ability to distribute your application to different physical nodes should the need arise.
With regard to the "low-level" approach, you're right. But you can always use a higher-level wrapper depending on your needs. XMLRPC could be a good candidate, but it is maybe overkill for the task you're trying to perform.
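For a sense of scale, a stdlib-only XMLRPC sketch (Python 3 module names; the ping function and port are my own choices):

# server
from xmlrpc.server import SimpleXMLRPCServer

def ping(msg):
    print(msg)
    return True

server = SimpleXMLRPCServer(('localhost', 8000))
server.register_function(ping)
server.serve_forever()

# client
import xmlrpc.client
proxy = xmlrpc.client.ServerProxy('http://localhost:8000/')
print(proxy.ping('hello'))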
Twisted offers some good simple protocol implementations, such as LineReceiver (for simple line-based messages) or the more elegant AMP (which was, by the way, standardized and implemented in different languages).
Check out a cross-platform library/server called RabbitMQ. Might be too heavy for two-process communication, but if you need multi-process or multi-codebase communication (with various different means, e.g. one-to-many, queues, etc), it is a good option.
Requirements:
$ pip install pika
$ pip install bson  # for sending binary content
$ sudo apt-get install rabbitmq-server  # Ubuntu; see the RabbitMQ installation instructions for other platforms
Publisher (sends data):
import pika, time, bson, os

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='logs', type='fanout')

i = 0
while True:
    data = {'msg': 'Hello %s' % i, b'data': os.urandom(2), 'some': bytes(bytearray(b'\x00\x0F\x98\x24'))}
    channel.basic_publish(exchange='logs', routing_key='', body=bson.dumps(data))
    print("Sent", data)
    i = i + 1
    time.sleep(1)

connection.close()
Subscriber (receives data, can be multiple):
import pika, bson

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='logs', type='fanout')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
channel.queue_bind(exchange='logs', queue=queue_name)

def callback(ch, method, properties, body):
    data = bson.loads(body)
    print("Received", data)

channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()
Examples based on https://www.rabbitmq.com/tutorials/tutorial-two-python.html
I would use sockets, but use Twisted to give you some abstraction, and to make things easy. Their Simple Echo Client / Server example is a good place to start.
You would just have to combine the files and instantiate and run either the client or server depending on the passed argument(s).
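For a flavor of it, the server half of such an echo setup is only a few lines (a sketch in the spirit of that example; port 8000 is my own choice):

from twisted.internet import protocol, reactor

class Echo(protocol.Protocol):
    def dataReceived(self, data):
        self.transport.write(data)  # echo everything straight back

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Echo()

reactor.listenTCP(8000, EchoFactory())
reactor.run()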