Send message to multiple servers pyzmq - python

If I have one client connect to multiple servers, and try to send a message,
socket = context.socket(zmq.REQ)
socket.connect("tcp://127.0.0.1:5565")
socket.connect("tcp://127.0.0.1:5566")
socket.connect("tcp://127.0.0.1:5567")
socket.send("Hello all")
only one server will actually get the message. The documentation says that pyzmq performs some simple load balancing across all available servers.
Is there a way send a message to all servers, rather than just one?
Background:
I am trying to control a network of Raspberry Pis with my computer. I need to send a message to all of them at once, but I can't use the PUB/SUB model, because then they all need to respond to that message.
I have one requester (master computer) that sends a request to all of the repliers (Raspberry Pis), and they all reply individually. For example, I could send one message asking for the reading from a temperature sensor, and I want all of the Raspberry Pis to read a temperature sensor and send the value back.

Yes.
Use an appropriate Formal Communication Pattern.
The ZMQ.REQ formalism indeed expects that the component is asking some other process, by sending a REQUEST, to do some job in response to the message. Thus the multiple egress targets that the .connect() calls have built transport relations with are served in round-robin mode, selecting one after another under a fair-queuing policy. So the component works, but for a different purpose than the one you are asking it to serve.
Solution
Try a more complex Formal Communication Pattern that "spreads" the message to all relevant peers (PUB/SUB-like, but also more complex, smarter, fail-safe derived schemes) that would serve your Raspberry Pi solution's needs.
The greatest strength of ZeroMQ is that it off-loads the low-level details from you and leaves you immense power to design all the distributed, scalable Formal Communication Patterns you need. Forget about just the few primitives (building blocks) directly listed in the ZeroMQ binding. Think about your abstract message/event-processing scheme, and then assemble ZeroMQ elements to meet that scheme.
A ZeroMQ [socket] is not a hose from A to B. It is rather an access port for dialogues with smart Formal Communication Pattern nodes. You may benefit from the fact that a [socket] can work over many transport classes at the same time ... so your Formal Communication Patterns may span L3 networks [TCP:] and also go into [IPC:] and [INPROC:] process-to-process channels inside the [localhost].
All working in parallel (well, sure, almost in parallel once inspected in finer detail).
All working in a smooth, co-integrated environment.
Where to source from?
The best next step, IMHO, is to get a bit more of a global view, which may sound complicated for the first few things one tries to code with ZeroMQ. At the very least, jump to page 265 of Code Connected, Volume 1 [asPdf->], if you are not reading it step by step.
The fastest learning curve would be to first take an unexposed view of Fig. 60 (Republishing Updates) and Fig. 62 (HA Clone Server) for a possible high-availability approach, and then go back to the roots, elements and details.

Use PUB/SUB to send the request, and an entirely separate PUSH/PULL socket to get the answers back. The response message should probably include a field saying which Pi it has come from.
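A minimal single-process sketch of that split, assuming pyzmq is installed. The inproc endpoints and the three in-process "Pis" are stand-ins for illustration; on a real network the master would bind tcp:// endpoints and each Pi would connect to them:

```python
import time
import zmq

ctx = zmq.Context.instance()

# master: PUB broadcasts the request, PULL collects the replies
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://requests")
pull = ctx.socket(zmq.PULL)
pull.bind("inproc://replies")

# each Pi: SUB receives the request, PUSH sends its own reply back
pis = []
for i in range(3):
    sub = ctx.socket(zmq.SUB)
    sub.connect("inproc://requests")
    sub.setsockopt_string(zmq.SUBSCRIBE, "")
    push = ctx.socket(zmq.PUSH)
    push.connect("inproc://replies")
    pis.append((i, sub, push))

time.sleep(0.2)                      # let the SUB sockets join (slow-joiner)
pub.send_string("read_temperature")  # the one request reaches every Pi

for i, sub, push in pis:
    request = sub.recv_string()
    push.send_string("pi-%d: 21.5" % i)  # fake reading, tagged by sender

replies = [pull.recv_string() for _ in range(3)]
print(replies)
```

Tagging each reply with the sender's identity (here the "pi-N" prefix) is what lets the master know which Pi said what, since PULL fair-queues the replies in arbitrary order.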

An alternate way is to use PUSH/PULL instead of PUB/SUB, because with the PUB/SUB method your message may be lost if a subscriber is not running yet, whereas with the PUSH/PULL method, when a client/sender posts a message (PUSH), the server/getter can pick it up at any time with PULL.
Here's a simple example:
Client side snippet code:
import zmq

def create_push_socket(ip, port):
    print('PUSH')
    context = zmq.Context()
    socket = context.socket(zmq.PUSH)
    zmq_address = "tcp://{}:{}".format(ip, port)
    socket.connect(zmq_address)
    return socket

# replace the RPi-n-IP / RPi-n-PORT placeholders with each Pi's address
sock1 = create_push_socket('RPi-1-IP', RPi-1-PORT)
sock2 = create_push_socket('RPi-2-IP', RPi-2-PORT)
sock3 = create_push_socket('RPi-3-IP', RPi-3-PORT)

sock1.send(b'Hello')
sock2.send(b'Hello')
sock3.send(b'Hello')
Server side snippet code:
import zmq

def listen():
    context = zmq.Context()
    zmq_ = context.socket(zmq.PULL)
    zmq_.bind('tcp://*:6667')
    print(zmq_.recv())

listen()

I just used an array of REQ/REP pairs. Each client has multiple REQ sockets and each server has one REP socket. Is this not a scalable solution? The data being sent does not require high scalability. If it does become a problem, I could work something out with PUB/SUB.
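That per-server REQ/REP array can be sketched as below, assuming pyzmq is installed; the inproc endpoints stand in for the real per-Pi tcp addresses, and both sides run in one process purely for illustration:

```python
import zmq

ctx = zmq.Context.instance()

# one REP socket per "server" (in reality, each Pi binds its own)
servers = []
for i in range(3):
    rep = ctx.socket(zmq.REP)
    rep.bind("inproc://pi-%d" % i)
    servers.append(rep)

# the client keeps a dedicated REQ socket per server
requesters = []
for i in range(3):
    req = ctx.socket(zmq.REQ)
    req.connect("inproc://pi-%d" % i)
    requesters.append(req)

# "broadcast" by looping: one request goes out on every REQ socket
for req in requesters:
    req.send(b"read_temperature")

# each server answers its own request; the client gathers every reply
answers = []
for rep, req in zip(servers, requesters):
    request = rep.recv()          # REP must recv before it can send
    rep.send(b"21.5")             # fake sensor value
    answers.append(req.recv())
print(answers)
```

This scales linearly in the number of sockets the client holds, which is fine for a handful of Pis; the strict send/recv lock-step of REQ/REP is what makes each reply trivially attributable to its server.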

Related

Efficient way to send results every 1-30 seconds from one machine to another

Key points:
I need to send roughly 100 float numbers every 1-30 seconds from one machine to another.
The first machine is catching those values through sensors connected to it.
The second machine is listening for them, passing them to an http server (nginx), a telegram bot and another program sending emails with alerts.
How would you do this and why?
Please be accurate. This is the first time I've worked with sockets and with Python, but I'm confident I can do this. Just give me the crucial details; lighten me up!
Some small portion (a few rows) of the code would be appreciated if you think it's a delicate part, but the main goal of my question is to see the big picture.
The main thing here is to decide on a connection design and to choose a protocol, i.e. will you have a persistent connection to your server, or connect each time new data is ready?
Then, will you use HTTP POST, WebSockets, or ordinary sockets? Will you rely exclusively on nginx, or will your data catcher be another serving service?
This would be the most secure way, if other people will also be connecting to nginx to view sites etc.
Write or use another server to run on another port, for example another nginx process just for that. Then use SSL (i.e. HTTPS) with basic authentication to prevent anyone else from abusing the connection.
Then on the client side, make a packet of all the data every x seconds (pickle.dumps(), json, or something), connect to your port with your credentials, and pass the packet.
A Python script may wait for it there.
Or you can write a socket server from scratch in Python (not extra hard) to wait for your packets.
The caveat here is that you have to implement your own protocol and security. But you gain some other benefits: it is much easier to maintain a persistent connection if you desire or need to. I don't think it is necessary, though, and coding break recovery can become bulky.
No, just wait on some port for a connection. The client must clearly identify itself (else you instantly drop the connection), prove that it talks your protocol, and then send the data.
Use SSL sockets so that you don't have to implement encryption yourself to protect the authentication data. You may even rely only on keys built in advance for security, and then pass only data.
Do not worry about the speed. Sockets are handled by the OS, and if you are on a Unix-like system you may connect as many times as you want, in as small a time interval as you need. Nothing short of a DoS attack will impact it much.
If on Windows, better to use some finished server, because Windows sometimes does not release a socket on time, so you will be forced to wait or do some hackery to avoid this unfortunate behaviour (non-blocking sockets, address reuse, and then some flow control will be needed).
As long as your data is small, you don't have to worry much about the server protocol. I would use HTTPS myself, but I would write my own light-weight server in Python, or modify and run one of the examples from the internet. That's me, though.
The simplest thing that could possibly work would be to take your N floats, convert them to a binary message using struct.pack(), and then send them via a UDP socket to the target machine (if it's on a single LAN you could even use UDP multicast, then multiple receivers could get the data if needed). You can safely send a maximum of 60 to 170 double-precision floats in a single UDP datagram (depending on your network).
This requires no application protocol, is easily debugged at the network level using Wireshark, is efficient, and makes it trivial to implement other publishers or subscribers in any language.
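A sketch of that approach, with sender and receiver compressed into one script over localhost (the port number is a made-up example; in practice the sender would target the second machine's address):

```python
import socket
import struct

PORT = 9999                       # hypothetical port, pick your own
readings = [21.5, 19.0, 3.14]     # the floats caught from the sensors

# receiver ("second machine"): bind a UDP socket and wait for a datagram
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", PORT))

# sender ("first machine"): pack all floats into one binary datagram
payload = struct.pack("!%dd" % len(readings), *readings)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(payload, ("127.0.0.1", PORT))

# receiver unpacks; the count is recovered from the datagram length
data, addr = rx.recvfrom(65535)
received = list(struct.unpack("!%dd" % (len(data) // 8), data))
print(received)
```

The `!` prefix fixes network byte order, so a receiver written in any other language can decode the same bytes; each double occupies exactly 8 bytes, which is why the count falls out of the datagram length.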

Client-Server socket setup, how to respond to a determined client

This question has been edited to focus in a simpler problem
So I have a basic client-server socket setup, in which the client sends JSON like {'id': '1', 'value': 'A'}. At the server side, if I receive a message with id 2, I want to send a message to the client with id 1, telling him that his new value is C.
This message should be "private", i.e., only id 1 should receive it, no broadcasting allowed.
How should I approach this problem? How could I keep track of the connections at the server side so that I could send a message to a determined client? The problem is that it's the server the one sending the message to the client, not responding to a client's message. I guess it must be with some combination of threading and queues, but still haven't figured out how to do it.
This is the code I have right now at the server, keeping track of the clients using a dict, but it's not working (bad file descriptor at the sendall('C') line):
track_clients = {}
while True:
    print "waiting for a connection"
    connection, client_address = sock.accept()
    try:
        print "connection from ", client_address
        data = json.loads(connection.recv(1024))
        track_clients[data['id']] = connection
        if data['id'] == '2':
            conn = track_clients['1']
            conn.sendall('C')
        connection.sendall(json.dumps(data))
    finally:
        connection.close()
You can have a look at Channels (http://channels.readthedocs.org/en/latest/), alongside Redis (https://pypi.python.org/pypi/redis/).
Have you considered using zeromq for this task?
It is easy to use and provides high level implementation of common patterns.
From zeromq guide
ZeroMQ (also known as ØMQ, 0MQ, or zmq) looks like an embeddable
networking library but acts like a concurrency framework. It gives you
sockets that carry atomic messages across various transports like
in-process, inter-process, TCP, and multicast. You can connect sockets
N-to-N with patterns like fan-out, pub-sub, task distribution, and
request-reply. It's fast enough to be the fabric for clustered
products. Its asynchronous I/O model gives you scalable multicore
applications, built as asynchronous message-processing tasks. It has a
score of language APIs and runs on most operating systems. ZeroMQ is
from iMatix and is LGPLv3 open source.
Also, it seems better to reuse existing libraries, because you can focus directly on your tasks while the library provides all the required high-level methods.
The code above is OK. The problem is the connection.close() in the finally block. Removing it fixes the issue.
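A bounded sketch of the fixed flow, with a demo server and two clients in one process on an ephemeral port (the loop runs twice instead of forever so the example terminates): the accepted sockets stay in track_clients instead of being closed, so the server can later push the private message to client 1.

```python
import json
import socket
import threading

srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # ephemeral port for the demo
srv.listen(5)
port = srv.getsockname()[1]

def server():
    track_clients = {}
    for _ in range(2):            # bounded instead of `while True` for the demo
        connection, client_address = srv.accept()
        data = json.loads(connection.recv(1024))
        track_clients[data["id"]] = connection
        if data["id"] == "2":
            track_clients["1"].sendall(b"C")   # private push to client 1 only
        # note: no connection.close() -- sockets stay alive in track_clients

t = threading.Thread(target=server)
t.start()

c1 = socket.create_connection(("127.0.0.1", port))
c1.sendall(json.dumps({"id": "1", "value": "A"}).encode())
c2 = socket.create_connection(("127.0.0.1", port))
c2.sendall(json.dumps({"id": "2", "value": "B"}).encode())

t.join()
pushed = c1.recv(1)               # client 1 receives its new value
print(pushed)
```

Keeping the accepted sockets open is the whole trick: the dict maps the application-level id to a live connection, which is what makes a targeted, non-broadcast reply possible.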

Maintaining TCP Connection

I have a Python program that reads values from an ADC, then writes them to a file as well as sending them via TCP if a connection is available. I can send the data fine; however, as data is constantly being read, I would like to be able to keep the connection open. How do I get the client to check that the server has more data to send, and thus keep the connection open?
This scenario seems very similar to one of our applications. We use ZeroMQ for this.
Using ZeroMQ with PUB/SUB
The PyZMQ docs include an example of using PUB/SUB.
The data provider creates a PUB socket and sends messages to it.
The data consumer sets up a SUB socket and reads messages from it.
Typically, the PUB socket is the fixed part of the infrastructure, so it binds to some port and the SUB connects. But if you like, you can switch this around and it works too.
Advantages are:
provider sends messages to PUB socket and does not block on it
reconnects are handled automatically
if there is no consumer or connection, PUB socket silently drops the messages.
the code to implement this is very short
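To show just how short that code is, here is a minimal PyZMQ sketch of the provider/consumer pair, compressed into one process over an inproc transport (a real deployment would bind and connect tcp:// endpoints across the two machines; the "adc" topic name is a made-up example):

```python
import time
import zmq

ctx = zmq.Context.instance()

# provider: PUB binds and fires readings without ever blocking
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://adc")

# consumer: SUB connects and filters on the "adc" topic prefix
sub = ctx.socket(zmq.SUB)
sub.connect("inproc://adc")
sub.setsockopt_string(zmq.SUBSCRIBE, "adc")

time.sleep(0.2)              # give the subscription time to propagate
pub.send_string("adc 3.14")  # provider publishes one ADC reading

topic, value = sub.recv_string().split()
print(topic, value)
```

If the consumer is down, the publisher's send still returns immediately and the message is silently dropped, which is exactly the non-blocking, auto-reconnecting behaviour listed above.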
Other messaging patterns with ZeroMQ
PUB/SUB is just one option, there are other combinations like PUSH/PULL, REQ/REP etc.
I have some services that have been running for years with PUSH/PULL, and they hold up quite well (typically about six weeks before some restart is needed, but that is due more to problems in the hardware than in the ZeroMQ library).

Python tcp socket client

I need to have a TCP socket client connected to a server to send and receive data.
But this socket must always be on, and I cannot open another socket.
I always have some data to send over time, and then later I process the answer to the data sent previously.
If I could open many sockets, I think it would be easier. But in my case I have to send everything over the same socket, asynchronously.
So the question is: what do you recommend to use within the Python ecosystem (Twisted, Tornado, etc.)?
Should I consider node.js or another option?
I highly recommend Twisted for this:
It comes with out-of-the-box support for many TCP protocols.
It is easy to maintain a single connection, there is a ReconnectingClientFactory that will deal with disconnections and use exponential backoff, and LoopingCall makes it easy to implement a heartbeat.
Stateful protocols are also easy to implement and intermingle with complex business logic.
It's fun.
I have a service that is exactly like the one you mention (single login, stays on all the time, processes data). It's been on for months working like a champ.
Twisted is possibly hard to get your head around, but the tutorials here are a great start. Knowing Twisted will get you far in the long run!
"I have to send everything on the same socket asynchronously"
Add your data to a queue, and have a separate thread take items off the queue and send them via socket.send().
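That queue-plus-thread pattern can be sketched like this; the local listening socket is only a stand-in for the real remote server so the example runs self-contained, and the "reading-N" payloads are made up:

```python
import queue
import socket
import threading

send_q = queue.Queue()

def sender(sock, q):
    # dedicated thread: the only place that ever writes to the shared socket
    while True:
        item = q.get()
        if item is None:          # sentinel: shut the sender down
            break
        sock.sendall(item)

# stand-in for the remote server, so the example runs locally
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

client = socket.create_connection(("127.0.0.1", port))
conn, _ = srv.accept()

t = threading.Thread(target=sender, args=(client, send_q))
t.start()

# any part of the program can now enqueue data without touching the socket
send_q.put(b"reading-1\n")
send_q.put(b"reading-2\n")
send_q.put(None)
t.join()
client.close()

received = b""
while b"reading-2\n" not in received:
    received += conn.recv(1024)
print(received)
```

Because only the sender thread calls sendall(), writes from many producers never interleave mid-message, which is the property the single-socket constraint demands.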

Python Socket Programming

I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a server. What I want is this: the server first sends data to all the clients specifying a sending_interval, and then all the clients keep sending their data with a time gap of that interval (as specified by the server). Please help me out with how I can do this using Python socket programming (i.e. multiple-client-to-single-server connectivity, with each client sending data at the gap specified by the server). I will be grateful if anyone can help me. Thanks in advance.
This problem is easily solved by the ZeroMQ socket library. It is production stable. It allows you to define publisher-subscriber relationships, where a publishing process will publish data on a port regardless of how many (0 to infinite) listening processes there are. They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a controlling channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at an interval specified by the default/config channel/socket.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
We're using ZeroMQ and loving the simplicity it gives; there are no connection errors, because the publisher doesn't care if anyone is listening, and the subscriber can start before the publisher; if there's nothing there to listen to, it can just loop around and wait until there is.
Bindings are available in ALL languages (it's freaky). The Python binding isn't pure Python (it requires a C compiler), but it is frighteningly fast, and the pub/sub example is a cut-and-paste, 'golly, it works!' experience.
Link: http://zeromq.org
There are MANY other methods available with this library, including message queues, etc. They have relatively complete documentation, too.
Multi-client, single-server socket programming can be achieved with multithreading. I have implemented both methods:
Single client and single server
Multi-client and single server
in my GitHub repo: https://github.com/shauryauppal/Socket-Programming-Python
What is Multi-threading Socket Programming?
Multithreading is the concurrent execution of multiple threads within a single process.
To understand it well, you can visit this link (written by me): https://www.geeksforgeeks.org/socket-programming-multi-threading-python/
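A compact sketch of the multithreaded approach applied to this question, with server and clients in one process on an ephemeral port (the interval value "5" and the message format are made-up placeholders): the server hands each client its sending interval on connect, and a thread per client collects the readings.

```python
import socket
import threading

results = []

def handle_client(conn):
    # per-client thread: send the interval, then receive one reading
    conn.sendall(b"5")            # tell the client: report every 5 seconds
    results.append(conn.recv(64))
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))        # ephemeral port for the demo
srv.listen(5)
port = srv.getsockname()[1]

def server_loop(n_clients):
    threads = []
    for _ in range(n_clients):
        conn, addr = srv.accept()
        t = threading.Thread(target=handle_client, args=(conn,))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

server_thread = threading.Thread(target=server_loop, args=(3,))
server_thread.start()

def client(i):
    s = socket.create_connection(("127.0.0.1", port))
    interval = int(s.recv(64))    # read the server-specified sending interval
    s.sendall(b"client-%d interval=%d" % (i, interval))
    s.close()

clients = [threading.Thread(target=client, args=(i,)) for i in range(3)]
for c in clients:
    c.start()
for c in clients:
    c.join()
server_thread.join()
print(results)
```

In a real testbed each client would loop, sleeping `interval` seconds between sends; the demo sends a single reading per client so it terminates.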
