I'm currently making a guessing game in Python and I'm trying to use select.select to allow multiple clients to connect to my server, but I cannot wrap my head around how to use select.select. I've looked all over the internet, but all the tutorials I've come across are for chat servers, which I can't seem to relate to.
I was just wondering how I'd let multiple clients connect to my server through select.select. And also, how would I send/receive data to/from individual clients using select.select?
I've looked all over the internet but all the tutorials I've come across
are for chat servers which I can't seem to relate to.
There's no difference between a chat server and a game server regarding the use of select.select.
I was just wondering how I'd let multiple clients connect to my server
through select.select.
You'd pass the server socket (the one you called listen on) in the rlist argument to select. If, after select returns, the server socket is in the first list (the objects that are ready for reading) of the returned triple of lists, you'd call accept on the server socket to get a new client socket, which you'd then include in the rlist of subsequent select calls.
And also how would I send/receive data to/from individual clients
using select.select
If, after select returns, a client socket is in the first list (the objects that are ready for reading) of the returned triple of lists, you'd receive data by calling recv on that client socket.
You don't need to use select for writing; you'd just send data by calling send.
See the question "Handle multiple requests with select" for an example server.
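In code, the whole loop might look roughly like this (the port and the reply text are made up, and there is no game logic):

    import select
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", 12345))              # made-up port
    server.listen(5)

    rlist = [server]                      # sockets watched for reading
    while True:
        readable, _, _ = select.select(rlist, [], [])
        for sock in readable:
            if sock is server:
                client, addr = server.accept()   # a new client is connecting
                rlist.append(client)             # watch it from now on
            else:
                data = sock.recv(4096)           # ready for reading, so this won't block
                if not data:                     # empty result: the client disconnected
                    rlist.remove(sock)
                    sock.close()
                else:
                    sock.send(b"got your guess\n")   # reply to this client only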
Key points:
I need to send roughly 100 float numbers every 1-30 seconds from one machine to another.
The first machine is collecting those values from sensors connected to it.
The second machine is listening for them and passing them to an HTTP server (nginx), a Telegram bot, and another program that sends alert emails.
How would you do this and why?
Please be precise. It's the first time I've worked with sockets and with Python, but I'm confident I can do this. Just give me the crucial details and enlighten me!
A small portion (a few lines) of the code would be appreciated if you think a part is delicate, but the main goal of my question is to see the big picture.
The main thing here is to decide on a connection design and to choose a protocol: will you keep a persistent connection to your server, or connect each time new data is ready?
Then, will you use HTTP POST, WebSockets, or plain sockets? Will you rely exclusively on nginx, or will your data catcher be a separate serving service?
Relying on nginx would be the most secure way if other people will also be connecting to it to view sites, etc.
Write or use another server running on another port, for example another nginx process just for that. Then use SSL (i.e. HTTPS) with basic authentication to prevent anyone else from abusing the connection.
Then, on the client side, build a packet of all the data every x seconds (pickle.dumps(), JSON, or something similar), connect to your port with your credentials, and send the packet.
A Python script can wait for it there.
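As a rough illustration of that client side (the URL, the credentials, the read_sensors() stub and the use of the requests library are my own placeholders, not part of the answer):

    import json
    import time
    import requests   # assumption: the third-party requests library is available

    URL = "https://example.com:8443/readings"   # made-up endpoint
    AUTH = ("sensor-box", "secret")             # made-up basic-auth credentials

    def read_sensors():
        return [0.0] * 100                      # placeholder for the real sensor values

    while True:
        packet = json.dumps({"ts": time.time(), "values": read_sensors()})
        requests.post(URL, data=packet, auth=AUTH, timeout=10)   # HTTPS POST with basic auth
        time.sleep(5)                           # every x seconds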
Or you can write a socket server from scratch in Python (not especially hard) to wait for your packets.
The caveat here is that you have to implement your own protocol and security, but you gain some other benefits: it is much easier to maintain a persistent connection if you want or need one. I don't think that is necessary, though, and the connection-recovery code can become bulky.
Just wait on some port for a connection. The client must clearly identify itself (otherwise you drop the connection immediately), prove that it speaks your protocol, and then send the data.
Use SSL sockets so that you don't have to implement encryption yourself to protect the authentication data. You could even rely solely on keys built in advance (pre-shared keys) for security and then send only the data.
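A minimal sketch of such an SSL-wrapped listener (the certificate paths and port are placeholders):

    import socket
    import ssl

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")   # placeholder paths

    raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    raw.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    raw.bind(("", 8443))                        # made-up port
    raw.listen(5)

    with context.wrap_socket(raw, server_side=True) as server:
        while True:
            conn, addr = server.accept()        # TLS handshake happens here
            data = conn.recv(65536)             # one small packet of sensor data
            # ... verify the client's identification, then hand the data off ...
            conn.close()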
Do not worry about the speed. Sockets are handled by the OS, and if you are on a Unix-like system you can connect as many times as you want, in as short an interval as you need. Nothing short of a DoS attack will impact it much.
If you are on Windows, it is better to use an existing server, because Windows sometimes does not release a socket in time, so you will be forced to wait or resort to some hackery to avoid this unfortunate behaviour (non-blocking sockets, SO_REUSEADDR, and then some flow control will be needed).
Since your data is small, you don't have to worry much about the server protocol. I would use HTTPS myself, but I would write my own lightweight server in Python or modify and run one of the examples from the internet. That's just me, though.
The simplest thing that could possibly work would be to take your N floats, convert them to a binary message using struct.pack(), and send them via a UDP socket to the target machine (if it's on a single LAN you could even use UDP multicast, so multiple receivers could get the data if needed). You can safely fit roughly 60 to 170 double-precision floats in a single UDP datagram (depending on your network).
This requires no application protocol, is easily debugged at the network level using Wireshark, is efficient, and makes it trivial to implement other publishers or subscribers in any language.
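A rough sketch of that idea (the host, port, and number of floats are placeholders):

    import socket
    import struct

    values = [1.5] * 100                                    # placeholder for the ~100 sensor floats
    payload = struct.pack("!%dd" % len(values), *values)    # doubles in network byte order

    # Sender
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(payload, ("192.0.2.10", 9999))            # made-up receiver address

    # Receiver (runs on the other machine)
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("", 9999))
    data, addr = receiver.recvfrom(2048)
    floats = struct.unpack("!%dd" % (len(data) // 8), data)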
I am trying to create a server in Python 2.7.3 which sends data to all client connections whenever one client connection sends data to the server. For instance, if client c3 sent "Hello, world!" to my server, I would like to then have my server send "Hello, world!" to client connections c1 and c2. By client connections, I mean the communications sockets returned by socket.accept(). Note that I have tried using the asyncore and twisted modules, but AFAIK they do not support this. Does anybody know any way to accomplish this?
EDIT: I have seen Twisted, but I would much rather use the socket module. Is there a way (possibly multithreading, possibly using select) that I can do this using the socket module?
You can absolutely do this using Twisted Python. You just accept the connections and set up your own handling logic (of course the library does not include built-in support for your particular communication pattern exactly, but you can't expect that).
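For what it's worth, a minimal Twisted sketch of that broadcast pattern might look like this (the port and class name are mine, not from the question):

    from twisted.internet import protocol, reactor

    class Broadcast(protocol.Protocol):
        def connectionMade(self):
            self.factory.clients.append(self)           # remember every connection

        def connectionLost(self, reason):
            self.factory.clients.remove(self)

        def dataReceived(self, data):
            for client in self.factory.clients:
                if client is not self:                  # forward to everyone else
                    client.transport.write(data)

    factory = protocol.Factory()
    factory.protocol = Broadcast
    factory.clients = []
    reactor.listenTCP(9000, factory)                    # made-up port
    reactor.run()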
I am trying to write a VPN server, so that multiple clients can connect to each other over a virtual network.
So I need a threaded server to send and receive data to/from clients concurrently.
A tunnel interface may be created for each client, representing the client's virtual interface on the server.
I have two solutions for using the select() function to read/write from/to the tunnels on the server:
Using a single thread that calls select([tun0, tun1, tun2], [tun0, tun1, tun2], []) over all tunnels, and using buffers to hold pending traffic (a rough sketch of this follows below).
Calling select([tun0], [tun0], []) separately in each client's thread.
My question is: which way is better?
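For reference, this is roughly what the first approach would look like; it is only a sketch, and the tunnel descriptors are assumed to be already-open tun file descriptors:

    import os
    import select

    def single_select_loop(tunnels):
        # One thread multiplexes every tunnel descriptor with a single select() call.
        out_buffers = {fd: b"" for fd in tunnels}        # per-tunnel pending output
        while True:
            want_write = [fd for fd in tunnels if out_buffers[fd]]
            readable, writable, _ = select.select(tunnels, want_write, [])
            for fd in readable:
                packet = os.read(fd, 2048)               # one packet from this tunnel
                for other in tunnels:                    # naive routing: queue for the others
                    if other != fd:
                        out_buffers[other] += packet
            for fd in writable:
                sent = os.write(fd, out_buffers[fd])
                out_buffers[fd] = out_buffers[fd][sent:]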
I am developing a group chat application to learn how to use sockets, threads (maybe), and the asyncore module (maybe).
My thought was to have a client-server architecture, so that when a client connects, the server sends it a list of the other connections (other clients' user names and IP addresses); a person can then connect to one or more people at a time, and the server would set up a P2P connection between the clients. I have the socket part working, but the server can only handle one client connection at a time.
What would be the best, most common, and practical way to go about handling multiple connections?
Do I create a new process/thread whenever a new connection comes into the server and then connect the different client connections together, or do I use the asyncore module, which from what I understand makes the server send the same data to multiple sockets (connections), so that I just have to regulate where the data goes?
Any help/thoughts/advice would be appreciated.
For a group chat application, the general approach will be:
Server side (accept process):
    Create the socket, bind it to a well-known port (on the appropriate interface), and listen.
    While app_running:
        client_socket = accept (on the server socket)
        Spawn a new thread and pass this socket to it; that thread handles the client that just connected.
        Continue, so that the server can keep accepting more connections.
Server-side client management thread:
    While app_running:
        Read the incoming message and store it in a queue or similar.
        Continue.
Server side (group chat processing):
    For all connected clients:
        Check their queues. If any message is present, send it to ALL the connected clients (including the client that sent it, which serves as a kind of ACK).
Client side:
    Create a socket.
    Connect to the server via IP address and port.
    Do send/receive.
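A compressed sketch of those server-side pieces (the port is made up and the message handling is simplified):

    import queue
    import socket
    import threading

    incoming = queue.Queue()            # messages received from all client threads
    clients = []                        # currently connected client sockets
    lock = threading.Lock()

    def handle_client(conn):
        # Server-side client management thread: read messages into the queue.
        while True:
            data = conn.recv(4096)
            if not data:                # client disconnected
                with lock:
                    clients.remove(conn)
                conn.close()
                return
            incoming.put(data)

    def broadcaster():
        # Group chat processing: send every queued message to ALL clients.
        while True:
            message = incoming.get()
            with lock:
                for c in clients:
                    c.send(message)

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("", 5000))             # made-up port
    server.listen(5)
    threading.Thread(target=broadcaster, daemon=True).start()

    while True:                         # accept process
        conn, addr = server.accept()
        with lock:
            clients.append(conn)
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()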
There can be lots of improvements on the above. For example, the server could poll the sockets or use a "select" operation on a group of sockets. That would be more efficient, because having a separate thread for each connected client is overkill when there are many (think roughly 1 MB of stack per thread).
PS: I haven't really used the asyncore module, but I'm guessing you would notice some performance improvement with it when you have lots of connected clients and very little processing per client.
I need a way to simulate connectivity problems in an automated test suite, on Linux, and preferably from Python. Some sort of proxy that I can put in front of the web server that can hang or drop connections after one trigger or another (after X bytes transferred, etc) would be perfect.
It doesn't seem too hard to build, but I'd rather grab something pre-existing, if anyone has any good recommendations.
When I needed one, I found that building it yourself is the best thing.
Start by setting up a threaded server in Python with socketserver (http://docs.python.org/dev/library/socketserver.html); you don't have to use the class itself.
It's very simple:
In the new connection's thread, you create a new socket and connect it to the real server.
Then you put both of them in a list and pass it to select.select (import select).
Then, when socket x receives data, send it to y; when socket y receives data, send it to x. (Don't forget to close the sockets when you receive an empty string.)
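Roughly, the forwarding part in each connection thread might look like this (the upstream address and the trigger logic are placeholders):

    import select
    import socket

    def pipe(client_sock, server_addr):
        # Forward bytes between the client and the real server until either side closes.
        upstream = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        upstream.connect(server_addr)               # e.g. ("127.0.0.1", 8080)
        pair = {client_sock: upstream, upstream: client_sock}
        while True:
            readable, _, _ = select.select(list(pair), [], [])
            for sock in readable:
                data = sock.recv(4096)
                if not data:                        # the "empty string" case: close both ends
                    client_sock.close()
                    upstream.close()
                    return
                # here you could count bytes and hang or drop the connection on a trigger
                pair[sock].send(data)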
Now you can do whatever you want with it.
If you need anything, I'm here.