I am trying to write a VPN server that lets multiple clients connect to each other over a virtual network.
So I need a threaded server that can send and receive data to/from the clients concurrently.
A tunnel interface may be created for each client, representing that client's virtual interface on the server.
I have two candidate designs for using the select() function to read from / write to the tunnels on the server:
1. Using a single thread that calls select([tun0, tun1, tun2], [tun0, tun1, tun2], []) for all tunnels, with per-tunnel buffers to hold pending traffic (sketched below).
2. Calling select([tun0], [tun0], []) separately in each client's own thread.
My question is: which approach is better?
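For reference, a minimal sketch of the first design (one thread, one select() over all tunnels, per-tunnel output buffers). Here tun0, tun1 and tun2 are placeholders for already-open tun file objects, MTU is an illustrative read size, and the packet-routing step is left out:

    import os
    import select

    MTU = 1500                                # illustrative read size
    tunnels = [tun0, tun1, tun2]              # assumed: already-open tun file objects
    out_buffers = {t: b"" for t in tunnels}   # pending outbound data per tunnel

    while True:
        # only poll for writability on tunnels that actually have queued data
        want_write = [t for t in tunnels if out_buffers[t]]
        readable, writable, _ = select.select(tunnels, want_write, [])

        for t in readable:
            packet = os.read(t.fileno(), MTU)          # one packet from this tunnel
            # routing (not shown): pick the destination tunnel dst and queue it,
            # e.g. out_buffers[dst] += packet

        for t in writable:
            sent = os.write(t.fileno(), out_buffers[t])
            out_buffers[t] = out_buffers[t][sent:]     # keep whatever was not written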
I have a client written in Python that communicates with a server written in Go via a TCP socket. Currently, I create a new socket object and connect to the Go server every time. More specifically, suppose my Go server listens on localhost:4040; every time I connect from the Python side the connection has a different source address (localhost:6379, 6378, ...). I wonder if there is a way to reuse old connections (like a connection pool) rather than creating a new one every time. If so, how do I determine that a connection has finished and become idle? Do I need an extra ACK message for that? Thanks.
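One way to avoid reconnecting is simply to keep a single long-lived connection and reuse it. Below is a minimal sketch under that assumption; the address localhost:4040 comes from the question, while the helper names, buffer size and the reconnect-once policy are made up for illustration:

    import socket

    SERVER_ADDR = ("localhost", 4040)   # the Go server from the question
    _conn = None                        # module-level cached connection

    def get_connection():
        """Return the cached socket, creating it only when needed."""
        global _conn
        if _conn is None:
            _conn = socket.create_connection(SERVER_ADDR)
        return _conn

    def send_request(payload):
        """Send on the cached connection; reconnect once if the peer dropped it."""
        global _conn
        for _attempt in range(2):
            try:
                conn = get_connection()
                conn.sendall(payload)
                reply = conn.recv(4096)
                if reply:                       # b"" means the server closed the socket
                    return reply
            except OSError:
                pass
            if _conn is not None:               # drop the dead connection, retry once
                _conn.close()
            _conn = None
        raise ConnectionError("could not reach the Go server")

Note that TCP itself has no notion of a request being "finished", so knowing when the connection is idle has to come from your application protocol (for example length-prefixed or delimited messages, or an explicit reply per request) rather than from an extra TCP-level ACK.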
I am using a library (ShareDB) for operational transformation, and the server and client sides use websocket-json-stream to communicate. However, ShareDB is run on Node.js as a service (I'm using zerorpc to control my Node processes), since my main web framework is Tornado (Python). I understand from this thread that with a stateful protocol such as TCP, connections are differentiated by the client port (so only one server port is required). And according to this response regarding how websockets handle multiple incoming requests, there is no difference in the underlying transport channel between TCP and websockets.
So my question is: if I create a websocket from the client to the Python server, and also from the client to my Node.js code (the ShareDB service), how can the server differentiate which socket goes with which? Is it the server's responsibility to have only a single socket 'listening' for a connection at a given time (i.e., to first establish communication with the Python server and then start listening for the second websocket)?
The simplest way to run two server processes on the same physical server box is to have each of them listen on a different port and then the client connects to the appropriate port on that server to indicate which server it is trying to connect to.
If you can only have one incoming port due to your server environment, then you can use something like a proxy. You still have your two servers listening on different ports, but neither one is listening on the port that is open to the outside world. The proxy listens on the one incoming port that is open to the outside world and then based on some characteristics of the incoming connection, the proxy directs that incoming connection to the appropriate server process.
The proxy can be configured to identify which process you are trying to connect to either via the URL or the DNS hostname.
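As a concrete illustration of the first option (each process on its own port), here is a minimal sketch; the ports 4041/4042 and the handler names are made up, and in practice the two services would be separate processes rather than threads in one script:

    import socketserver
    import threading

    class ServiceAHandler(socketserver.BaseRequestHandler):
        def handle(self):
            data = self.request.recv(4096)
            self.request.sendall(b"handled by service A: " + data)

    class ServiceBHandler(socketserver.BaseRequestHandler):
        def handle(self):
            data = self.request.recv(4096)
            self.request.sendall(b"handled by service B: " + data)

    # Each service listens on its own port; a client picks the service
    # simply by choosing which port it connects to.
    service_a = socketserver.ThreadingTCPServer(("0.0.0.0", 4041), ServiceAHandler)
    service_b = socketserver.ThreadingTCPServer(("0.0.0.0", 4042), ServiceBHandler)

    threading.Thread(target=service_a.serve_forever, daemon=True).start()
    service_b.serve_forever()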
Imagine you have two Python processes, one server and one client, that interact with each other.
Both processes/programs run on the same host and communicate via TCP, e.g. by using the AMP protocol of the Twisted framework.
Can you think of an efficient and clean way for the two Python programs to authenticate each other?
What I want to achieve is that the server only accepts connections from an authentic client, so that unauthorized third-party processes cannot connect to it.
I want to avoid things like public-key cryptography or SSL connections because of the huge overhead.
If you do not want to use SSL, there are a few options:
1. The client sends an authentication token (you may call it a password) to the server as part of the first data sent through the socket. This is the simplest way, and it is also cross-platform (sketched below).
2. The client sends its process ID (OS-specific). The server then makes some system calls to determine the path of that process's executable. If the path is valid, the client is approved. For example, a valid path might be '/bin/my_client' or "C:\Program Files\MyClient\my_client.exe", and another client (say, one running from '/bin/some_another_app') trying to communicate with your server would be rejected. But I think this is also overhead, and the implementation is OS-specific.
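A minimal sketch of the first option, assuming both processes were given the same secret token out of band; the token value, message framing and function names are illustrative only:

    import hmac
    import socket

    SHARED_TOKEN = b"replace-with-a-random-secret"   # distributed out of band

    def server_accept_authenticated(listen_sock):
        """Accept one connection and keep it only if the first line is the token."""
        conn, _addr = listen_sock.accept()
        presented = conn.recv(256).rstrip(b"\n")
        # constant-time comparison avoids leaking the token via timing
        if hmac.compare_digest(presented, SHARED_TOKEN):
            conn.sendall(b"OK\n")
            return conn
        conn.close()                                  # silently drop unauthenticated peers
        return None

    def client_connect_authenticated(host, port):
        conn = socket.create_connection((host, port))
        conn.sendall(SHARED_TOKEN + b"\n")
        if conn.recv(16).strip() != b"OK":
            conn.close()
            raise PermissionError("server rejected the authentication token")
        return conn

Note that this only proves knowledge of the token at connect time; anything stronger (for example a challenge-response so the token never crosses the socket) starts drifting back toward the cryptography the question wants to avoid.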
I'm currently making a guessing game in Python and I'm trying to use select.select to allow multiple clients to connect to my server, but I cannot wrap my head around how to use select.select. I've looked all over the internet, but all the tutorials I've come across are for chat servers, which I can't seem to relate to.
I was just wondering how I'd let multiple clients connect to my server through select.select. And also, how would I send/receive data to/from individual clients using select.select?
I've looked all over the internet, but all the tutorials I've come across are for chat servers, which I can't seem to relate to.
There's no difference between a chat server and game server regarding the use of select.select.
I was just wondering how I'd let multiple clients connect to my server through select.select.
You'd pass the server socket (which you called listen on) in the rlist argument to select; if after return from select the server socket is in the first list (the objects that are ready for reading) of the returned triple of lists, you'd call accept on the server socket and thus get the new client socket, which you'd append to the rlist in subsequent select calls.
And also, how would I send/receive data to/from individual clients using select.select?
If after return from select a client socket is in the first list (the objects that are ready for reading) of the returned triple of lists, you'd receive data by calling recv on that client socket.
You don't need to use select for writing; you'd just send data by calling send.
See the question "Handle multiple requests with select" for an example server.
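Putting those pieces together, a minimal sketch of a select-based game server loop; the port, buffer size and the placeholder reply are arbitrary:

    import select
    import socket

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 5000))
    server.listen()

    rlist = [server]                 # the listening socket plus all client sockets

    while True:
        readable, _, _ = select.select(rlist, [], [])
        for sock in readable:
            if sock is server:
                client, addr = server.accept()   # a new client is ready to be accepted
                rlist.append(client)
            else:
                data = sock.recv(4096)
                if not data:                     # empty read: the client disconnected
                    rlist.remove(sock)
                    sock.close()
                    continue
                # the guessing-game logic would go here; replying is a plain send
                sock.sendall(b"you sent: " + data)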
I am developing a testbed for a cloud computing environment. I want to establish multiple client connections to a server. What I want is for the server to first send data to all the clients specifying a sending_interval, and then for all the clients to keep sending their data with a time gap of that interval (as specified by the server). Please help me out: how can I do this with a Python socket program? (I.e., I want multiple-client-to-single-server connectivity, with each client sending data at the time interval specified by the server.) I will be grateful if anyone can help me. Thanks in advance.
This problem is easily solved by the ZeroMQ socket library. It is production stable. It allows you to define publisher-subscriber relationships, where a publishing process will publish data on a port regardless of how many (0 to infinite) listening processes there are. They call this the PUB-SUB model; it's in their docs (link below).
It sounds like you want to set up a bunch of clients that are all publishers. They can subscribe to a control channel, which will send updates to their configuration (how often to write). They also act as publishers, pushing out their own data at the interval specified via that default/config channel.
Then, you have one or more listening processes that listen to all the clients' published messages. Perhaps you could even have two listening processes, one for backup or DR, or whatever.
We're using ZeroMQ and loving the simplicity it gives; there are no connection errors, because the publisher doesn't care if anyone is listening, and the subscriber can start before the publisher: if there's nothing to listen to yet, it just loops around and waits until there is.
Bindings are available in ALL languages (it's freaky). The Python binding isn't pure Python (it does require a C compiler), but it is frighteningly fast, and the pub/sub example is a cut-and-paste, "golly, it works!" experience.
Link: http://zeromq.org
There are MANY other methods available with this library, including message queues, etc. They have relatively complete documentation, too.
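A minimal sketch of the PUB-SUB pattern with pyzmq, as two pieces meant to run in separate processes; the port 5556, the "config" topic and the message format are made up for illustration:

    import time
    import zmq

    # controller process: repeatedly broadcasts the sending interval
    def run_publisher():
        ctx = zmq.Context()
        pub = ctx.socket(zmq.PUB)
        pub.bind("tcp://*:5556")
        while True:
            pub.send_string("config interval=5")   # topic "config", payload "interval=5"
            time.sleep(1)                          # re-send so late subscribers get it too

    # client process: waits for the configuration broadcast
    def run_subscriber():
        ctx = zmq.Context()
        sub = ctx.socket(zmq.SUB)
        sub.connect("tcp://localhost:5556")
        sub.setsockopt_string(zmq.SUBSCRIBE, "config")  # filter on the "config" topic
        message = sub.recv_string()
        interval = int(message.split("=", 1)[1])
        print("will publish my own data every", interval, "seconds")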
Multi-client, single-server socket programming can be achieved with multithreading. I have implemented both of these methods:
1. Single client and single server
2. Multi-client and single server
Both are in my GitHub repo: https://github.com/shauryauppal/Socket-Programming-Python
What is Multi-threading Socket Programming?
Multithreading is the concurrent execution of multiple threads within a single process.
To understand this better, you can visit https://www.geeksforgeeks.org/socket-programming-multi-threading-python/, written by me.
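For completeness, a minimal sketch of the multithreaded pattern (one thread per client); it is a generic echo server, not the code from the linked repo, and the port and messages are arbitrary:

    import socket
    import threading

    def handle_client(conn, addr):
        """Each client gets its own thread, so a slow client does not block the others."""
        with conn:
            while True:
                data = conn.recv(4096)
                if not data:            # the client closed the connection
                    break
                conn.sendall(data)      # echo back, as a placeholder for real logic

    def run_server(host="0.0.0.0", port=5000):
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen()
        while True:
            conn, addr = server.accept()
            threading.Thread(target=handle_client, args=(conn, addr), daemon=True).start()

    if __name__ == "__main__":
        run_server()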