Python client-server - tell if client is offline

My basic problem is that I am looking for a way for multiple clients to connect to a server over the internet, and for the server to be able to tell if those clients are online or offline.
My current way of doing this is a python socket server, and python clients, which send the server a small message every 2 seconds. The server checks each client to see if it has received such a message in the last 5 seconds, and if not, the client is marked as offline.
However, I feel that this is probably not the best way of doing it, and even if it is, there might be a library that does this for me. I have looked for such a library but have come up empty-handed.
Does anyone know of a better way of doing this, or a library which can automatically check the status of multiple connected clients?
Note: by "offline", I mean that the client could be powered off, network connection disconnected or program quit.

Assuming you are not after a ping from server to client, I believe that your approach is fine. Very often the server will not be able to reach the client directly, but it works the other way around. You may run out of resources if you have many connected clients.
Also, over this established channel you can send other data/metrics, and boom, monitoring was born ;-) If you send other data you will probably realize you don't need to send a heartbeat every 2 seconds, but only when no other data has been sent recently - boom, FIX works this way (and so do many other messaging protocols).
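For reference, a minimal sketch of that bookkeeping, assuming one TCP connection and one thread per client, and treating any received data (heartbeat or payload) as proof of life; the port number and the 5-second timeout are placeholders:

import socket
import threading
import time

HEARTBEAT_TIMEOUT = 5   # seconds without any message before a client is marked offline
last_seen = {}          # client address -> time of last received message
lock = threading.Lock()

def handle_client(conn, addr):
    with conn:
        while True:
            data = conn.recv(1024)          # heartbeat or real payload, both count
            if not data:
                break                       # client closed the connection
            with lock:
                last_seen[addr] = time.time()
    with lock:
        last_seen.pop(addr, None)           # clean disconnect

def sweep_offline():
    while True:
        now = time.time()
        with lock:
            stale = [a for a, t in last_seen.items() if now - t > HEARTBEAT_TIMEOUT]
            for addr in stale:
                print("offline:", addr)
                del last_seen[addr]
        time.sleep(1)

def main():
    threading.Thread(target=sweep_offline, daemon=True).start()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 9000))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()

if __name__ == "__main__":
    main()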
What you may like is something like Kafka, which will transport the messages for you; there are other messaging protocols too, and they scale better than just connecting all the clients directly (assuming you have many of them).
Happy messaging

Good morning, I am working on a similar project and I want to post my approach.
When a client connects to my server with client, address = sock.accept(), we can take its IP with ip_client = address[0]. Assuming you keep a list of the connected IPs, you can append it with connected_clients.append(ip_client).
Finally you have a list with the connected IPs.
In a thread, or inside an infinite loop, run the following code:
import os

for connected in list(connected_clients):  # iterate over a copy so removing is safe
    response = os.system("ping -c 1 " + connected)
    if response == 0:  # os.system returns the exit status; 0 means the ping succeeded
        continue
    else:
        connected_clients.remove(connected)
Don't forget the import os at the beginning, and you have made a beacon of connected clients.

Related

Redirecting HTTP requests to device without static/public IP

I'm using a service that sends me some data from user over webhooks. If there is any user interaction on this service, it hits my URL with HTTP request, with the data in POST/GET, and then expects text/json response to show back to the user. The response has to be in few seconds, otherwise the HTTP request times out and the service has no way of finding out what should be the response to the user.
The problem here is that now I'm not processing these data on my server with a public IP, but I need to do it on my RPi, which keeps moving, which means it has a different IP every few hours, and mostly not a public one.
I'm sure I will still need to use the server with the public IP to redirect these requests to my RPi, and I have a few ideas, but I don't know what is reliable or if it would even work.
1. Let the API talk to my server and save the data, then have the RPi constantly ask my server whether there are any new data. Probably the dumbest idea: not ideal over a metered connection, probably a longer reply time, and it will be harder to return the RPi's reply in the HTTP request made by the API.
2. Have a (Python) script running on my server that will a) act as a socket server the RPi connects to, and b) run a SimpleHTTPRequestHandler to process requests from the API, pass them to the socket, and then reply with the RPi's reply (see the sketch after this list). Probably an easy way to keep a connection between my server and the RPi, allowing me to pass data in both directions.
3. Open an SSH tunnel between the RPi and my server. This way I could process the requests from the service directly on my RPi. But how reliable is this solution? (Keeping it alive, opening the tunnel automatically, etc. is probably a question for the Super User forum.)
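For what it's worth, a rough sketch of what option 2 could look like, assuming a single long-lived RPi connection and a simple newline-delimited, one-reply-per-request exchange; the port numbers and the framing are placeholders rather than a real implementation:

import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

RPI_PORT = 9100    # the RPi keeps one long-lived TCP connection to this port
HTTP_PORT = 8080   # the webhook service sends its HTTP requests here

rpi_conn = None    # most recent connection from the RPi
rpi_lock = threading.Lock()

def accept_rpi():
    # Accept (re)connections from the RPi and remember the latest socket.
    global rpi_conn
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", RPI_PORT))
        srv.listen()
        while True:
            conn, _ = srv.accept()
            with rpi_lock:
                rpi_conn = conn

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        with rpi_lock:
            conn = rpi_conn
        if conn is None:
            self.send_response(503)       # no RPi connected right now
            self.end_headers()
            return
        # Forward the webhook body to the RPi and wait for one reply line.
        conn.sendall(body + b"\n")
        reply = conn.makefile("rb").readline()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(reply)

if __name__ == "__main__":
    threading.Thread(target=accept_rpi, daemon=True).start()
    HTTPServer(("0.0.0.0", HTTP_PORT), WebhookHandler).serve_forever()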
I'm thinking of going with choice 3 if it is possible, but first I'd like to hear what you guys think. Is this a good and reliable idea? Or are there any better ways I don't know about? Or has anybody already faced this problem?
To sum it up:
Something sends an HTTP request to a public IP. I need to process this request (and reply to it) in a Python script on a device without a public IP. I have a server with a public IP that could be used as a bridge. I don't much care what runs on the server, as long as it can redirect these requests.
Thanks

Telnet server: is it good practice to keep connections open?

I'm working on a NetHack clone that is supposed to be played through Telnet, like many NetHack servers. As I've said, this is a clone, so it's being written from scratch in Python.
I've set up my socket server reusing code from an SMTP server I wrote a while ago, and all of a sudden my attention jumped to this particular line of code:
s.listen(15)
My server was designed to be able to handle up to 15 simultaneous clients, just in case the data exchange with any of them took too long, but ideally listen(1) or listen(2) would be enough. But this case is different.
As happens with Alt.org when you telnet into their NetHack servers, people connected to my server should be able to play my roguelike remotely through a single telnet session, so I guess this connection should not be interrupted. Yet, I've read here that
[...] if you are really holding more than 128 queued connect requests you are
a) taking too long to process them or b) need a heavy-weight
distributed server or c) suffering a DDoS attack.
What is the better practice to carry out here? Should I keep every connection open until the connected user disconnects or is there any other way? Should I go for listen(128) (or whatever my system's socket.SOMAXCONN is) or is that a bad practice?
The number in listen(number) limits the number of pending connect requests.
A connect request is pending from the moment the initial SYN is received by the OS until you call the socket's accept method. So the number does not limit the number of open (established) connections; it limits the number of connections in the SYN_RECV state.
It is a bad idea not to answer an incoming connection, because:
the client will retransmit SYN requests until an answering SYN is received;
the client cannot distinguish the situation where your server is unavailable from the one where it is just waiting in the queue.
A better idea is to accept the connection, send the client a message with the rejection reason, and then close the connection.
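To make that concrete, a minimal sketch of the accept-then-reject pattern, assuming a hypothetical MAX_PLAYERS cap of 15 concurrent sessions; the port and the rejection text are only illustrative:

import socket
import threading

MAX_PLAYERS = 15        # hypothetical cap on concurrent game sessions
active = []             # sockets of players currently in a game
lock = threading.Lock()

def play(conn):
    try:
        conn.sendall(b"Welcome!\r\n")   # the actual game session would run here
    finally:
        with lock:
            active.remove(conn)
        conn.close()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2323))
    srv.listen(socket.SOMAXCONN)        # backlog only covers not-yet-accepted connections
    while True:
        conn, addr = srv.accept()       # always accept promptly...
        with lock:
            full = len(active) >= MAX_PLAYERS
            if not full:
                active.append(conn)
        if full:
            conn.sendall(b"Server full, please try again later.\r\n")
            conn.close()                # ...but reject with a reason when at capacity
        else:
            threading.Thread(target=play, args=(conn,), daemon=True).start()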

Creating a multi-user chat with sockets in Python, how to handle the departure of the server?

I was trying to implement a multi-user chat (group chat) with sockets in Python.
It basically works like this: each message that a user sends is received by the server, and the server sends it on to the rest of the users.
The problem is that if the server program is closed, it crashes for everyone else.
So, how can you handle the departure of the server? Should you change the server somehow, or is there another way around it?
Thank you
Could you make your server log heartbeats, and also post heartbeats to the clients over the socket?
If so, have a monitor check for the server heartbeats and restart the server application when the time since the last heartbeat exceeds a threshold.
Also, check for heartbeats on the client side and re-establish the connection when you haven't heard a heartbeat.
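A rough sketch of the client-side part of that idea, assuming the server writes a (hypothetical) literal b"PING" at least every few seconds; the address, timeout, and message format are placeholders:

import socket
import time

SERVER = ("chat.example.com", 9000)   # placeholder address
HEARTBEAT_TIMEOUT = 10                # seconds of silence before reconnecting

def run_client():
    while True:                       # the outer loop re-establishes the connection
        try:
            with socket.create_connection(SERVER, timeout=HEARTBEAT_TIMEOUT) as conn:
                while True:
                    data = conn.recv(4096)      # raises socket.timeout when the server goes silent
                    if not data:
                        break                   # server closed the connection cleanly
                    if data.strip() == b"PING":
                        continue                # heartbeat only, nothing to display
                    print(data.decode(errors="replace"))
        except OSError:
            pass                                # covers timeouts and refused connections
        time.sleep(2)                           # back off briefly, then reconnect

if __name__ == "__main__":
    run_client()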

Python Twisted client not able to receive response from server

I have a client written using python-twisted (http://pastebin.com/X7UYYLWJ) which sends a UDP packet to a UDP server written in C using libuv. When the client sends a packet to the server, it is successfully received by the server, which sends a response back to the Python client. But the client is not receiving any response; what could be the reason?
Unfortunately for you, there are many possibilities.
Your code uses connect to set up a "connected UDP" socket. Connected UDP sockets filter the packets they receive. If packets are received from any address other than the one to which the socket is connected, they are dropped. It may be that the server sends its responses from a different address than you've connected to (perhaps it uses another port or perhaps it is multi-homed and uses a different IP).
Another possibility is that a NAT device is blocking the return packets. UDP NAT hole punching has come a long way but it's still not perfect. It could be that the server's response arrives at the NAT device and gets discarded or misrouted.
Related to this is the possibility that an intentionally configured firewall is blocking the return packets.
Another possibility is that the packets are simply lost. UDP is not a reliable protocol. A congested router, faulty networking gear, or various other esoteric (often transient) concerns might be resulting in the packet getting dropped at some point, instead of forwarded to the next hop.
Your first step in debugging this should be to make your application as permissive as possible. Get rid of the use of connected UDP so that all packets that make it to your process get delivered to your application code.
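As a starting point for that first step, an unconnected-UDP Twisted client might look roughly like this; the server address and the payload are placeholders, not taken from the pastebin:

from twisted.internet import reactor
from twisted.internet.protocol import DatagramProtocol

SERVER = ("192.0.2.10", 5683)          # placeholder server address and port

class EchoClient(DatagramProtocol):
    def startProtocol(self):
        # No self.transport.connect() call: the socket stays unconnected,
        # so replies from any source address reach datagramReceived.
        self.transport.write(b"hello", SERVER)

    def datagramReceived(self, data, addr):
        print("received %r from %s" % (data, addr))
        reactor.stop()

reactor.listenUDP(0, EchoClient())     # 0 lets the OS pick a local port
reactor.run()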
If that doesn't help, use tcpdump or wireshark or a similar tool to determine if the packets make it to your computer at all. If they do but your application isn't seeing them, look for a local firewall configuration that might reject them.
If they're not making it to your computer, see if they make it to your router. Use whatever diagnostic tools are available (along the lines of tcpdump) on your router to see whether packets make it that far or not. Or if there are no such tools, remove the router from the equation. If you see packets making it to your router but no further, look for firewall or NAT configuration issues there.
If packets don't make it as far as your router, move to the next hop you have access to. This is where things might get difficult since you may not have access to the next hop or the next hop might be the server (with many intervening hops - which you have to just hope are all working).
Does the server actually generate a reply? What addressing information is on that reply? Does it match the client's expectations? Does it get dropped at the server's outgoing interface because of congestion or a firewall?
Hopefully you'll discover something interesting at one of these steps and be able to fix the problem.
I had a similar problem. The problem was the Windows firewall. In the firewall's allowed-programs settings, allowing communication for pythonw/python solved the problem. My Python program was:
from socket import *
import time

address = ('192.168.1.104', 42)              # Define who you are talking to (must match Arduino IP and port)
client_socket = socket(AF_INET, SOCK_DGRAM)  # Set up the socket
client_socket.bind(('', 45))                 # Arduino sending to port 45
client_socket.settimeout(1)                  # Only wait 1 second for a response
data = "xyz"
client_socket.sendto(data, address)
try:
    rec_data, addr = client_socket.recvfrom(2048)  # Read response from Arduino
    print rec_data                                 # Print the response from Arduino
except:
    pass
while(1):
    pass

Clustering TCP servers, so I can send data to all clients

Important note:
I've asked this question already on ServerFault: https://serverfault.com/questions/349065/clustering-tcp-servers-so-can-send-data-to-all-clients, but I'd also like a programmer's perspective on the problem.
I'm developing a real-time mobile app by setting up a TCP connection between the app and server backend. Each user can send messages to all other users.
(I'm making the TCP server in Python with Twisted, creating my own 'protocol' for communication between the app and the backend, and hosting it on Amazon Web Services.)
Currently I'm trying to make the backend scalable (and reliable). As far as I can tell, the system could cope with more users by upgrading to a bigger server (which could become rather limiting), or by adding new servers in a cluster configuration - i.e. having several servers sitting behind a load balancer, probably with 1 database they all access.
I have sketched out the rough architecture of this:
However, what if the Red user sends a message to all other connected users? Red's server has a TCP connection with Red, but not with Green.
I can think of one way to deal with this problem:
Each server could have an open TCP (or SSL) connection with each other server. When one server wants to send a message to all users, it simply passes it along its connections to the other servers. A record could be kept in the database of which servers are online (and their IP addresses), and one of the servers could act as a boss - i.e. it decides whether the others are up and running, and if not it removes them from the database. (If a server was up but lost its connection to the boss, it could check the database to see if it had been removed, and restart if it had; otherwise it could assume the boss was down.)
Clearly this needs refinement but shows the general principle.
Alternatively, and I'm not sure if this is even possible (it definitely seems like wishful thinking on my part):
Perhaps users could just connect to a box or router, and all servers could message all users through it?
If you know how to cluster TCP servers effectively, or a design pattern that provides a solution, or have any comments at all, then I would be very grateful. Thank you :-)
You need to decide (or, if you already have, share those decisions with us) the reliability requirements for your system: should all messages be delivered to all users in every case (e.g. when one or more servers crash)? Can you tolerate sending the same message twice to the same user after a server crash? Your system's complexity depends directly on these decisions.
The simplest version is the one where a message may not be delivered to all users when a server crashes. All your servers keep TCP connections to each other. One of them receives a message from a user and sends it to all other users connected to that server and to all other connected servers. The other servers then send this message to all their users. To scale the system you just run an additional server which connects to all the existing servers.
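A condensed sketch of that fan-out, with framing, peer discovery, and error handling left out entirely; the RELAY tag and the two lists are purely illustrative:

local_clients = []   # sockets of users connected to this server
peer_servers = []    # long-lived sockets to every other server in the mesh

def broadcast_from_user(msg):
    # A message that originated on this server goes to local users and,
    # tagged as a relay, to every peer server.
    for sock in local_clients:
        sock.sendall(msg + b"\n")
    for sock in peer_servers:
        sock.sendall(b"RELAY " + msg + b"\n")

def broadcast_from_peer(line):
    # A relayed message is delivered to local users only and never forwarded
    # again, which is what keeps the full mesh loop-free.
    if line.startswith(b"RELAY "):
        msg = line[len(b"RELAY "):]
        for sock in local_clients:
            sock.sendall(msg + b"\n")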
Have a look at how this is handled by IRC servers. They essentially can do this already: everybody can send to everybody else, on all servers, or just to single users, also on another server, or to groups, called "channels". It works best by routing messages amongst the servers.
It's not that hard, if you can make sure the servers know each other and can talk to each other.
On a side note: on 9/11, the most reliable internet news source was the IRC network. All the www sites were down because of bandwidth; it took them ages to even get a plain-text web page back up. During this time, IRC networks were able to provide near real-time, moderated news channels across the Atlantic. You might no longer have been able to log into a server on the other side, but at least the servers were able to keep up their server-to-server connections.
An obvious choice is to use the DB as a clearinghouse for messages. You have to store incoming messages somewhere anyway, lest they be lost if a server suddenly crashes. Put incoming messages into the central database and have notification processes on the TCP servers grab the messages and send them to the correct users.
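A very rough sketch of that pattern, assuming a hypothetical messages table with (id, recipient, body, delivered) columns; SQLite stands in here for whatever central database the servers share, and deliver() is whatever pushes a message down a user's TCP connection:

import sqlite3
import time

def poll_and_deliver(deliver, db_path="messages.db", interval=0.5):
    # Hand every undelivered message to deliver(recipient, body), then mark it done.
    db = sqlite3.connect(db_path)
    while True:
        rows = db.execute(
            "SELECT id, recipient, body FROM messages WHERE delivered = 0"
        ).fetchall()
        for msg_id, recipient, body in rows:
            deliver(recipient, body)    # push over that user's TCP connection
            db.execute("UPDATE messages SET delivered = 1 WHERE id = ?", (msg_id,))
        db.commit()
        time.sleep(interval)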
A TCP server cannot be clustered that way; the snapshot you posted here is a classic HTTP server example.
Since the device opens a TCP connection to the server - say, a pure socket - there will be no way of establishing a load-balancing server.
