Client-server socket setup: how to respond to a specific client - Python

This question has been edited to focus on a simpler problem.
So I have a basic client-server socket setup, in which the client sends JSON like {'id': '1', 'value': 'A'}. On the server side, if I receive a message with id 2, I want to send a message to the client with id 1, telling it that its new value is C.
This message should be "private", i.e. only id 1 should receive it; no broadcasting allowed.
How should I approach this problem? How can I keep track of the connections on the server side so that I can send a message to a particular client? The tricky part is that the server is the one initiating the message to the client, not responding to a client's message. I guess it takes some combination of threading and queues, but I still haven't figured out how to do it.
This is the code I have right now on the server, keeping track of the clients using a dict, but it's not working (bad file descriptor at the sendall('C') line):
track_clients = {}
while True:
    print "waiting for a connection"
    connection, client_address = sock.accept()
    try:
        print "connection from ", client_address
        data = json.loads(connection.recv(1024))
        track_clients[data['id']] = connection
        if data['id'] == '2':
            conn = track_clients['1']
            conn.sendall('C')
        connection.sendall(json.dumps(data))
    finally:
        connection.close()

You can have a look at Channels (http://channels.readthedocs.org/en/latest/), alongside Redis (https://pypi.python.org/pypi/redis/).

Have you considered using ZeroMQ for this task?
It is easy to use and provides high-level implementations of common patterns.
From the ZeroMQ guide:
ZeroMQ (also known as ØMQ, 0MQ, or zmq) looks like an embeddable
networking library but acts like a concurrency framework. It gives you
sockets that carry atomic messages across various transports like
in-process, inter-process, TCP, and multicast. You can connect sockets
N-to-N with patterns like fan-out, pub-sub, task distribution, and
request-reply. It's fast enough to be the fabric for clustered
products. Its asynchronous I/O model gives you scalable multicore
applications, built as asynchronous message-processing tasks. It has a
score of language APIs and runs on most operating systems. ZeroMQ is
from iMatix and is LGPLv3 open source.
Also, it seems better to reuse an existing library, because you can focus directly on your task while the library provides all the required high-level methods.
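For this particular question, a ROUTER/DEALER pair is one ZeroMQ pattern that lets the server address one specific client by identity. Below is a minimal sketch, assuming each client sets its socket identity to its id and the server listens on port 5555 (both assumptions, not from the question); it illustrates the pattern rather than being a drop-in replacement for the question's code.

import zmq

context = zmq.Context()

# Server: a ROUTER socket receives [identity, payload] frames and can reply
# to any connected peer by sending [identity, payload] back.
server = context.socket(zmq.ROUTER)
server.bind("tcp://*:5555")

# Client "1": a DEALER socket with an explicit identity.
client1 = context.socket(zmq.DEALER)
client1.setsockopt(zmq.IDENTITY, b"1")
client1.connect("tcp://127.0.0.1:5555")
client1.send(b"A")

# Client "2": another DEALER with its own identity.
client2 = context.socket(zmq.DEALER)
client2.setsockopt(zmq.IDENTITY, b"2")
client2.connect("tcp://127.0.0.1:5555")
client2.send(b"B")

# Server loop: collect one message from each client, then react to id 2
# by sending a "private" message that only the peer with identity b"1" gets.
received = {}
for _ in range(2):
    identity, payload = server.recv_multipart()
    received[identity] = payload
if b"2" in received:
    server.send_multipart([b"1", b"C"])

print(client1.recv())  # b'C'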

The code in the question is basically fine; the problem is the connection.close() in the finally clause. Because every accepted connection is closed at the end of the loop iteration, track_clients['1'] holds an already-closed socket by the time client 2 arrives, hence the bad file descriptor. Removing the close() fixes the issue.
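A minimal sketch of the loop with the offending close() removed (same logic as the question's code, just keeping each accepted connection open so it can be reused later):

track_clients = {}
while True:
    print "waiting for a connection"
    connection, client_address = sock.accept()
    print "connection from ", client_address
    data = json.loads(connection.recv(1024))
    track_clients[data['id']] = connection   # keep the socket open for later use
    if data['id'] == '2':
        track_clients['1'].sendall('C')      # "private" message to client 1 only
    connection.sendall(json.dumps(data))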

Related

Python client-server - tell if client offline

My basic problem is that I am looking for a way for multiple clients to connect to a server over the internet, and for the server to be able to tell if those clients are online or offline.
My current way of doing this is a python socket server, and python clients, which send the server a small message every 2 seconds. The server checks each client to see if it has received such a message in the last 5 seconds, and if not, the client is marked as offline.
However, I feel that this is probably not the best way of doing this, and even if it is, there might be a library that does this for me. I have looked for such a library but have come up empty handed.
Does anyone know of a better way of doing this, or a library which can automatically check the status of multiple connected clients?
Note: by "offline", I mean that the client could be powered off, network connection disconnected or program quit.
Assuming you are not after pinging from server to client, I believe your approach is fine. Very often the server will not be able to reach the client directly, but it works the other way around. You may run out of resources if you have many connected clients.
Also, over this established channel you can send other data/metrics, and boom, monitoring was born ;-) If you send other data, you will probably realize you don't need to send a heartbeat every 2 seconds, but only when no other data was sent recently - FIX works this way (and so do many other messaging protocols).
You may also like something like Kafka, which will transport the messages for you; there are other messaging systems too, and they scale better than connecting every client directly (assuming you have many of them).
Happy messaging
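A minimal sketch of the last-seen bookkeeping the question describes (2-second heartbeats, 5-second timeout; the function names here are illustrative assumptions):

import time

HEARTBEAT_TIMEOUT = 5.0          # seconds of silence before a client counts as offline
last_seen = {}                   # client id -> time of the last message received

def on_message(client_id):
    # call this whenever any message (heartbeat or data) arrives from a client
    last_seen[client_id] = time.time()

def offline_clients():
    now = time.time()
    return [cid for cid, t in last_seen.items() if now - t > HEARTBEAT_TIMEOUT]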
Good morning, I am working on a similar project and I want to post my approach.
When a client connects to my server with client, address = sock.accept(), we can take its IP with ip_client = address[0]. Assuming you keep a list of the connected IPs, you can append the IP with connected_clients.append(ip_client), so you end up with a list of the connected IPs.
In a thread, or inside an infinite loop, write the following code:
import os

for connected in list(connected_clients):        # iterate over a copy so removal is safe
    response = os.system("ping -c 1 " + connected)
    if response == 0:                            # ping exits with status 0 when the host replied
        continue
    connected_clients.remove(connected)
Don't forget the os import at the beginning, and you have made yourself a beacon of connected clients.

Threading a UDP server

I would like to make a multi-threading UDP server in Python.
The purpose is to be able to connect several clients to the server (not socket connections, but username and password), interact with each of them, and do some actions on the server. All at the same time.
I am a little confused by all the different types of threading and I don't know what to use.
To be clearer, this is exactly what I want to do, all at the same time:
Wait for clients to send data for the first time and register their ip in a database
Act with "connected" clients by waiting for them to send datagrams and respond to them
Be able to act on the server itself. For example, change a client's password in my database
I would have a look at a framework that is good at handling asynchronous I/O. The idea is to not have one thread per socket that blocks until you receive data, but instead let one thread handle many sockets at once. This scales well if you want your server to handle many clients. (A minimal gevent sketch follows the list below.)
For example:
Gevent - "a coroutine-based Python networking library", example
Twisted - "an event-driven networking engine", example
Eventlet - "a concurrent networking library", example (TCP, but it uses a patched socket so you can also refer to the Python wiki page about UDP Communication)
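As an illustration of the first option, here is a minimal UDP echo server using gevent's DatagramServer; a sketch assuming port 9000, adapted from the pattern in gevent's documentation, so double-check the details against the current API:

from gevent.server import DatagramServer

class EchoServer(DatagramServer):
    # gevent invokes handle() in a new greenlet for every datagram,
    # so many clients are served concurrently by a single thread.
    def handle(self, data, address):
        print('%s sent %r' % (address[0], data))
        self.socket.sendto(b'ack: ' + data, address)

if __name__ == '__main__':
    EchoServer(':9000').serve_forever()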

Send message to multiple servers pyzmq

If I have one client connect to multiple servers, and try to send a message,
socket = context.socket(zmq.REQ)
socket.connect ("tcp://127.0.0.1:5565")
socket.connect ("tcp://127.0.0.1:5566")
socket.connect ("tcp://127.0.0.1:5567")
socket.send("Hello all")
only one server will actually get the message. The documentation says that pyzmq performs some simple load balancing across all available servers.
Is there a way to send a message to all servers, rather than just one?
Background:
I am trying to control a network of Raspberry Pis with my computer. I need to send a message to all of them at once, but I can't use the PUB/SUB model, because then they all need to respond to that message.
I have one requester (master computer) that sends a request to all of the repliers (Raspberry Pis), and they all reply individually. For example, I could send one message asking for the reading from a temperature sensor, and I want all of the Raspberry Pis to read their temperature sensor and send it back.
Yes.
Use an appropriate Formal Communication Pattern.
The ZMQ.REQ formalism indeed expects that the component is asking some other process, via sending a REQUEST, to do some job in response to the message. Thus the multiple egress targets that the .connect() calls have built a transport relation with are served in round-robin mode, selecting one after another, in a fair-queue policy. So the component works, but for a different purpose than the one you are asking it to serve.
Solution
Try a more complex Formal Communication Pattern that "spreads" the message to all relevant peers (PUB/SUB-alike), or smarter, fail-safe derived schemes that would serve your Raspberry Pi solution's needs.
The greatest strength of the ZeroMQ is in that it off-loads the low-level details from you and leaves you an immense power in designing all the needed distributed scaleable Formal Communication Patterns, that you need. Forget about just the few primitives ( building blocks ) directly listed in the ZeroMQ binding. Think about your abstract message/event processing scheme and then assemble ZeroMQ elements to meet that scheme.
ZeroMQ [socket] is not a hose from A to B. It is rather an access port for dialogues with smart Formal Communication Pattern Nodes. You may benefit, that [socket] may work over many transport classes at the same time ... so your Formal Communication Patterns may span over L3-networks [TCP:] + go into [IPC:] and [INPROC:] process-to-process channels inside the [localhost].
All working in parallel ( well, sure - almost in parallel once inspected in lower detail )
All working in smooth co-integrated environment.
Where to source from?
The best next step you may take, IMHO, is to get a bit more of a global view, which may sound complicated for the first few things one tries to code with ZeroMQ; if you are not reading it step by step, at least jump to page 265 of Code Connected, Volume 1 [asPdf->].
The fastest learning curve would be to first have a look at Fig. 60 "Republishing Updates" and Fig. 62 "HA Clone Server" for a possible high-availability approach, and then go back to the roots, elements and details.
Use PUB/SUB to send the request, and an entirely separate PUSH/PULL socket to get the answers back. The response message should probably include a field saying which Pi it has come from.
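A minimal sketch of that idea, with both sides squeezed into one script for readability; the port numbers (5556/5557), the 127.0.0.1 addresses and the "pi_id" field are illustrative assumptions:

import time
import zmq

context = zmq.Context()

# Master computer: PUB broadcasts the request, PULL collects the replies.
pub = context.socket(zmq.PUB)
pub.bind("tcp://*:5556")
pull = context.socket(zmq.PULL)
pull.bind("tcp://*:5557")

# One Raspberry Pi (in a real setup this part runs on each Pi, pointed at the master's IP).
sub = context.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "")     # receive every published request
push = context.socket(zmq.PUSH)
push.connect("tcp://127.0.0.1:5557")

time.sleep(0.5)                              # let the SUB connection settle ("slow joiner")

pub.send_string("read_temperature")          # master broadcasts the request

request = sub.recv_string()                  # the Pi receives it...
push.send_json({"pi_id": "pi-01", "reading": 21.5})   # ...and replies individually

print(pull.recv_json())                      # master collects one reply per Pi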
An alternative is to use PUSH/PULL instead of PUB/SUB, because with the PUB/SUB method your message may be lost if a subscriber is not yet running, whereas with PUSH/PULL, once a client/sender posts a message (PUSH), the server/getter can pick it up at any time with PULL.
Here's a simple example:
Client side snippet code:
import zmq

def create_push_socket(ip, port):
    print('PUSH')
    context = zmq.Context()
    socket = context.socket(zmq.PUSH)
    zmq_address = "tcp://{}:{}".format(ip, port)
    socket.connect(zmq_address)
    return socket

sock1 = create_push_socket('RPi-1-IP', RPi-1-PORT)
sock2 = create_push_socket('RPi-2-IP', RPi-2-PORT)
sock3 = create_push_socket('RPi-3-IP', RPi-3-PORT)

sock1.send('Hello')
sock2.send('Hello')
sock3.send('Hello')
Server side snippet code:
import zmq

def listen():
    context = zmq.Context()
    zmq_ = context.socket(zmq.PULL)
    zmq_.bind('tcp://*:6667')
    print(zmq_.recv())

listen()
I just used an array of REQ/REP pairs. Each client has multiple REQ sockets and each server has one REP socket. Is this not a scalable solution? The data being sent does not require high scalability. If it does become a problem, I could work something out with PUB/SUB.

How would I handle multiple sockets and send data between them in Python 2.7.3?

I am trying to create a server in Python 2.7.3 which sends data to all client connections whenever one client connection sends data to the server. For instance, if client c3 sent "Hello, world!" to my server, I would like to then have my server send "Hello, world!" to client connections c1 and c2. By client connections, I mean the communications sockets returned by socket.accept(). Note that I have tried using the asyncore and twisted modules, but AFAIK they do not support this. Does anybody know any way to accomplish this?
EDIT: I have seen Twisted, but I would much rather use the socket module. Is there a way (possibly multithreading, possibly using select) that I can do this using the socket module?
You can absolutely do this using Twisted Python. You just accept the connections and set up your own handling logic (of course the library does not include built-in support for your particular communication pattern exactly, but you can't expect that).
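If you would still rather stick with the plain socket module, as the EDIT suggests, a select-based relay is one way to do it. A minimal sketch, assuming port 9000 and small single-recv messages (both assumptions); it forwards whatever one client sends to every other connected client:

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 9000))
server.listen(5)

clients = []                                   # all accepted connection sockets

while True:
    readable, _, _ = select.select([server] + clients, [], [])
    for sock in readable:
        if sock is server:
            conn, addr = server.accept()       # new client connected
            clients.append(conn)
        else:
            data = sock.recv(1024)
            if not data:                       # client disconnected
                clients.remove(sock)
                sock.close()
                continue
            for other in clients:              # relay to every *other* client
                if other is not sock:
                    other.sendall(data)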

Python Socket Programming

I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a server. What I want is that the server first sends data to all the clients specifying a sending_interval, and then all the clients keep sending their data with a time gap of that interval (as specified by the server). Please help me out: how can I do this using Python socket programming? (i.e. multiple-client-to-single-server connectivity, with the clients sending data at the interval specified by the server.) I will be grateful if anyone can help. Thanks in advance.
This problem is easily solved by the ZeroMQ socket library. It is production stable. It allows you to define publisher-subscriber relationships, where a publishing process will publish data on a port regardless of how many (0 to infinite) listening processes there are. They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a controlling channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at the interval specified via that default/config channel.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
We're using ZeroMQ and loving the simplicity it gives; there are no connection errors because the publisher doesn't care if anyone is listening, and the subscriber can start before the publisher: if there's nothing there to listen to, it can just loop around and wait until there is.
Bindings are available in ALL languages (it's freaky). The Python binding isn't pure-python, it does require a C compiler, but is frighteningly fast, and the pub/sub example is a cut/paste, 'golly, it works!' experience.
Link: http://zeromq.org
There are MANY other methods available with this library, including message queues, etc. They have relatively complete documentation, too.
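A minimal sketch of that shape (the server publishes the sending_interval on a config channel, a client picks it up and then pushes its data at that interval); the port numbers, the 127.0.0.1 addresses and the field names are illustrative assumptions, and both sides are squeezed into one script for brevity:

import time
import zmq

context = zmq.Context()

# Server: publish configuration (the sending interval) to every client.
config_pub = context.socket(zmq.PUB)
config_pub.bind("tcp://*:5550")
# Server: collect the periodic client data.
data_pull = context.socket(zmq.PULL)
data_pull.bind("tcp://*:5551")

# Client side (one of many; normally a separate process or machine).
config_sub = context.socket(zmq.SUB)
config_sub.connect("tcp://127.0.0.1:5550")
config_sub.setsockopt_string(zmq.SUBSCRIBE, "")
data_push = context.socket(zmq.PUSH)
data_push.connect("tcp://127.0.0.1:5551")

time.sleep(0.5)                                  # allow the SUB connection to settle
config_pub.send_json({"sending_interval": 2})    # server announces the interval

interval = config_sub.recv_json()["sending_interval"]
for i in range(3):                               # client then reports at that interval
    data_push.send_json({"client": "c1", "sample": i})
    print(data_pull.recv_json())                 # server receives each report
    time.sleep(interval)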
Multi-client, single-server socket programming can be achieved with multithreading. I have implemented both methods:
Single Client and Single Server
Multiclient and Single Server
In my GitHub Repo Link: https://github.com/shauryauppal/Socket-Programming-Python
What is multi-threaded socket programming?
Multithreading is the execution of multiple threads concurrently within a single process.
To understand it well, you can visit https://www.geeksforgeeks.org/socket-programming-multi-threading-python/, written by me.
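A minimal sketch of the multi-client, single-server pattern with one thread per accepted connection (a generic echo-server illustration, not the code from the linked repository; port 8000 is an assumption):

import socket
import threading

def handle_client(conn, addr):
    # each client gets its own thread, so clients are served concurrently
    while True:
        data = conn.recv(1024)
        if not data:
            break
        conn.sendall(data)              # echo back whatever the client sent
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("", 8000))
server.listen(5)

while True:
    conn, addr = server.accept()
    t = threading.Thread(target=handle_client, args=(conn, addr))
    t.daemon = True                     # don't let client threads keep the process alive
    t.start()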
