I have a process that listens for traffic on a certain port and performs some manipulation before sending it on its way to a specific server. I would like to redirect all traffic through this process. Since I do not know of a way to send all of the machine's traffic to a port directly, I was thinking I could establish a PPTP server on localhost, listen for traffic, then send it on its way. The user would have to create a VPN with the destination being localhost:P1. The flow would be as follows:
Traffic destined for the default route is routed out the PPP tunnel interface (endpoint is localhost:P1).
The process listening on localhost:P1 gets a packet.
The process listening on localhost:P1 uses a previously established socket to server1, which is listening on P2, to send the data.
The process listening on localhost sends the data over that socket to server1:P2.
Responses flow in reverse.
I could accomplish this using a PPTP library for Python, if anyone knows of one. Is there a better way to accomplish this?
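For reference, here is a rough sketch of the forwarding step itself, ignoring the PPTP/VPN part (the port numbers, the server1 address, and the manipulate() hook are placeholders, and for simplicity it opens a fresh upstream connection per accepted connection rather than reusing one previously established socket):

    import socket
    import threading

    LISTEN_PORT = 5001                        # placeholder for P1
    SERVER1 = ("server1.example.com", 6001)   # placeholder for server1:P2

    def manipulate(data):
        # placeholder for whatever manipulation the process performs
        return data

    def pump(src, dst, transform):
        # copy bytes from src to dst, applying transform on the way
        while True:
            chunk = src.recv(4096)
            if not chunk:
                break
            dst.sendall(transform(chunk))
        dst.close()

    def handle(client_sock):
        upstream = socket.create_connection(SERVER1)
        # relay upstream -> client unchanged in a background thread
        threading.Thread(target=pump, args=(upstream, client_sock, lambda d: d),
                         daemon=True).start()
        # relay client -> upstream, applying the manipulation
        pump(client_sock, upstream, manipulate)

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("127.0.0.1", LISTEN_PORT))
    listener.listen(5)
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()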
A proxy server forwards HTTP traffic from a client to a host.
Actually, the proxy server has two jobs: (A) receive data from the client, and (B) send that data to the server, and vice versa for the response.
Now what if we separate these two tasks into two different proxy servers and connect those two servers using another protocol such as WebSocket?
Why do I want to do this? My initial intention is to bypass internet censorship in regions where most of the internet is blocked and only some protocols and servers (including Cloudflare) are reachable. Doing this, we can add a reverse proxy between our client and proxy server B so that server B remains anonymous.
WebSocket is used here because Cloudflare only allows standard HTTP and WebSocket traffic, not HTTP(S) proxying. And in case WebSocket gets blocked (which sounds unlikely), we could use another intermediate protocol such as SSH, FTP or HTTP. What are your thoughts about this? Is it possible? Is there such a proxy server out there? Or is there a better way?
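To make the idea concrete, here is a rough sketch of what proxy server B might look like in Python (the third-party websockets package, the port, and the "host:port" first-message framing are assumptions of mine, not an existing product). Proxy server A would be the mirror image: accept the client's local connection, send the target as the first websocket message, then forward bytes in both directions:

    import asyncio
    import websockets   # assumption: the third-party "websockets" package

    async def handle(ws):
        # first message from proxy A names the real destination, e.g. "example.com:443"
        target = await ws.recv()
        if isinstance(target, bytes):
            target = target.decode()
        host, port = target.rsplit(":", 1)
        reader, writer = await asyncio.open_connection(host, int(port))

        async def ws_to_tcp():
            async for chunk in ws:
                writer.write(chunk if isinstance(chunk, bytes) else chunk.encode())
                await writer.drain()

        async def tcp_to_ws():
            while True:
                data = await reader.read(65536)
                if not data:
                    break
                await ws.send(data)

        await asyncio.gather(ws_to_tcp(), tcp_to_ws())

    async def main():
        # note: the handler signature varies slightly across websockets versions
        async with websockets.serve(handle, "0.0.0.0", 8080):
            await asyncio.Future()   # run forever

    asyncio.run(main())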
I use a TCP server in Python that implements this class:
class ThreadedTCPServer(SocketServer.ThreadingTCPServer):
pass
The normal use of it works perfectly (initiating the server, handling requests, and so on).
Now I need to send a message to the clients outside of the handle function in the TcpRequestHandler(SocketServer.BaseRequestHandler) class.
I tried the following trick of using the server's internal socket (it works for UDP):
tcp_server.client_socket.send(message)
But I get this error message:
socket.error: [Errno 10057] A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied
So I assume it is not possible for TCP.
Is there any other way to do it?
I assume some servers need to send messages to their clients sometimes (messages that are not just responses to requests), but I couldn't find a good way to do it.
Thanks!
You have two general options with TCP:
Send a message to the client out of band (OOB). Here the server connects separately to the client and the roles are reversed: the client has to listen on a port for OOB messages and acts as a server in this regard. From your problem description you don't want to do this, which is fine.
Implement a protocol where the server can send messages to the client in response to incoming messages. You will need a way to multiplex the extra messages along with any expected return value to the initiating message. You could implement this with a shared queue on your server. You put messages into this queue outside of your handler and then when the handler is responding to messages you consume from the queue and insert them into the response.
If that sounds like something you are interested in, a rough sketch follows the pros and cons below.
There are pros and cons to both approaches:
In (1) you have more socket connections to manage, and you expose the client host to incoming connections, which you might not want. The protocols are simpler because they are not multiplexed.
In (2) you only have a single TCP stream, but you have to multiplex your OOB messages over it. You also have increased latency if the client is not regularly contacting the server.
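For approach (2), here is a minimal sketch of the shared-queue idea, reusing the ThreadedTCPServer from the question (the request/response framing is only illustrative):

    import queue
    import socketserver   # "SocketServer" on Python 2

    class ThreadedTCPServer(socketserver.ThreadingTCPServer):
        allow_reuse_address = True

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.oob_queue = queue.Queue()   # messages pushed from outside the handler

    class TcpRequestHandler(socketserver.BaseRequestHandler):
        def handle(self):
            while True:
                data = self.request.recv(4096)
                if not data:
                    break
                # drain any pending out-of-band messages and piggyback them
                # on this response, each on its own line
                extra = []
                try:
                    while True:
                        extra.append(self.server.oob_queue.get_nowait())
                except queue.Empty:
                    pass
                reply = b"ACK: " + data
                for msg in extra:
                    reply += b"\nOOB: " + msg
                self.request.sendall(reply)

    server = ThreadedTCPServer(("0.0.0.0", 9000), TcpRequestHandler)
    # elsewhere in your program (outside the handler):
    #   server.oob_queue.put(b"server-initiated message")
    # server.serve_forever()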
Hope that helps.
I am using a library (ShareDB) for operational transformation, and the server and client side use a websocket-json-stream to communicate. However, ShareDB is being run on Node.js as a service (I'm using zerorpc to control my Node processes), as my main web framework is Tornado (Python). I understand from this thread that with a stateful protocol such as TCP, connections are differentiated by the client port (so only one server port is required). And according to this response regarding how websockets handle multiple incoming requests, there is no difference in the underlying transport channel between TCP and websockets.
So my question is: if I create a websocket from the client to the Python server, and also from the client to my Node.js code (the ShareDB service), how can the server differentiate which socket goes with which? Is it the server's responsibility to have only a single socket 'listening' for a connection at a given time (i.e. to first establish communication with the Python server and then start listening for the second websocket)?
The simplest way to run two server processes on the same physical server box is to have each of them listen on a different port and then the client connects to the appropriate port on that server to indicate which server it is trying to connect to.
If you can only have one incoming port due to your server environment, then you can use something like a proxy. You still have your two servers listening on different ports, but neither one is listening on the port that is open to the outside world. The proxy listens on the one incoming port that is open to the outside world and then based on some characteristics of the incoming connection, the proxy directs that incoming connection to the appropriate server process.
The proxy can be configured to identify which process you are trying to connect to either via the URL or the DNS hostname.
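As an illustration, here is a minimal sketch of a proxy that picks a backend from the request path (the ports, the /sharedb prefix, and the assumption that the Tornado app and the Node/ShareDB service listen on 8000 and 8001 are mine):

    import asyncio

    BACKENDS = {"/sharedb": ("127.0.0.1", 8001)}   # path prefix -> Node/ShareDB service
    DEFAULT_BACKEND = ("127.0.0.1", 8000)          # Tornado app

    async def pipe(reader, writer):
        # shovel bytes one way until EOF
        try:
            while True:
                data = await reader.read(65536)
                if not data:
                    break
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle_client(client_reader, client_writer):
        # peek at the request line, e.g. b"GET /sharedb/doc HTTP/1.1", to pick a backend
        request_line = await client_reader.readline()
        parts = request_line.split()
        path = parts[1].decode() if len(parts) >= 2 else "/"
        host, port = DEFAULT_BACKEND
        for prefix, backend in BACKENDS.items():
            if path.startswith(prefix):
                host, port = backend
                break
        backend_reader, backend_writer = await asyncio.open_connection(host, port)
        backend_writer.write(request_line)   # replay the line we consumed
        await backend_writer.drain()
        await asyncio.gather(pipe(client_reader, backend_writer),
                             pipe(backend_reader, client_writer))

    async def main():
        server = await asyncio.start_server(handle_client, "0.0.0.0", 8080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())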
I've made a server (Python, Twisted) for my online game. I started with TCP, then later added constant updates over UDP (and saw a big speed improvement). But now I need to associate each UDP client with the corresponding TCP client.
I'm doing this by having each client first connect to the TCP server and receive a unique ID. The client then sends this ID to the UDP server, connecting it as well. I then have a main list of TCP clients (ordered by the unique ID).
My goal is to be able to send messages to the same client over both TCP and UDP.
What is the best way to link a UDP and TCP socket to the same client?
Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)?
Finally, if anyone with knowledge of TCP/UDP could tell me (I'm new!): will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? (I need to know this to secure my server, but I don't want to accidentally block legitimate users.)
Answering your last question: no. Because:
If the client is behind NAT and the gateway (doing the NAT) has more than one IP, every connection can appear to you as a connection from a different IP.
Another problem is that when several different clients behind the same NAT connect to your server, you will have more than one pair of TCP/UDP clients, and it will be impossible to match up the correct pairs.
Your method seems to be a good solution for the problem.
1- Can I just take the IP address of a new TCP client and send them data over UDP to that IP? NO in the general case, but ...
2- Is it necessary for the client to connect twice, once for TCP and once for UDP? NO, definitely not.
3- Will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? YES, except in special cases.
You really need some basic knowledge of the TCP, UDP and IP protocols to go further and, ideally, of the OSI model.
Basics (but you should read the Wikipedia articles for a deeper understanding):
TCP and UDP are two protocols on top of IP
IP is a routable protocol: it can pass through routers
TCP is a connection-oriented protocol: it can pass through gateways or proxies (firewalls and NATs)
UDP is a connectionless protocol: it cannot reliably pass through such gateways
a single machine may have more than one network interface (hardware slot): each will have a different IP address
a single interface may have more than one IP address
in the general case, client machines have only one network interface and one IP address; in any case, you can require that a client presents the same address to TCP and UDP when connecting to your server
Network Address Translation (NAT) is when a gateway between a local network and the wild internet always presents its own IP address and keeps track of TCP connections so it can send packets back to the correct client
In fact the most serious problem is when there is a gateway between the client and your server. While the client and the server are two (virtual) machines to which you have direct keyboard access, there is no problem; but corporate networks are generally protected by a firewall acting as a NAT, and many domestic ADSL routers also include a firewall and a NAT. In that case, just forget UDP. It is possible to instruct a domestic router to pass all UDP traffic to a single local IP, but it is not necessarily an easy job. In addition, that means that if one of your users has more than one machine at home, they will be able to use only one at a time and will have to reconfigure the router to switch to another one!
First of all, when you send data with TCP or UDP you have to give a port.
If your client connects with TCP and your server then sends a response with UDP, the packet will be rejected by the client.
Why? Because you have to register a port for the connection, and you cannot be sure the port is correctly open on the client.
So when you begin a connection with TCP, the client opens a port to send data and receive the response. You have to do the same with UDP: when the client initiates all communication with the server, you can be sure all the necessary ports are open.
Don't forget to send data back to the port from which the connection was opened.
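In practice that means the client sends a first UDP datagram (carrying, say, the ID it received over TCP), and the server records the (address, port) that datagram came from and replies to exactly that pair. A rough sketch of the server side (the HELLO framing and the port are just examples):

    import socket

    udp_clients = {}   # client_id -> (ip, port) learned from the client's first datagram

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 9999))

    while True:
        data, addr = sock.recvfrom(4096)
        if data.startswith(b"HELLO "):
            client_id = data[6:].strip()
            # remember the source address AND port of this datagram; replies must
            # go back to exactly this pair, or a NAT in the path will drop them
            udp_clients[client_id] = addr
            sock.sendto(b"WELCOME", addr)
        else:
            # normal game traffic: look up the sender in udp_clients and handle it
            pass

    # later, to push an update to a registered client:
    #   sock.sendto(b"STATE ...", udp_clients[some_id])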
Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)?
Why don't you want to create two connections?
You have to use UDP for movement, for example: if you are making an FPS, you might send the player's position every 50 ms, so it's really important to use UDP.
It's not just a question of a better connection. If you want a really good connection between client and server you need to use asynchronous sockets and a STREAM socket. But with a stream, your TCP socket does not signal the end of a message (in exchange for better throughput), so you have to write something to mark the end of each packet (for example <EOF>).
But this creates a problem: for every chunk you receive, you have to analyze the data and split it on <EOF>. That can take a lot of processor time.
With UDP every packet has a defined end (datagram boundaries are preserved), but you need to implement a security check yourself.
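Here is a small sketch of that delimiter-based framing on the receiving side, using the <EOF> tag from the example above:

    buffer = b""

    def feed(raw_bytes):
        """Accumulate raw TCP bytes and yield complete messages split on <EOF>."""
        global buffer
        buffer += raw_bytes
        while b"<EOF>" in buffer:
            message, buffer = buffer.split(b"<EOF>", 1)
            yield message

    # usage inside the receive loop:
    #   for msg in feed(sock.recv(4096)):
    #       handle(msg)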
I am developing a group chat application to learn how to use sockets, threads (maybe), and the asyncore module (maybe).
My thought was to have a client-server architecture, so that when a client connects to the server, the server sends the client a list of the other connected clients (their user names and IP addresses), and then a person can connect to one or more people at a time and the server would set up a P2P connection between the clients. I have the socket part working, but the server can only handle one client connection at a time.
What would be the best, most common, practical way to go about handling multiple connections?
Do I create a new process/thread whenever a new connection comes into the server and then connect the different client connections together, or do I use the asyncore module, which from what I understand lets the server send the same data to multiple sockets (connections) while I just regulate where the data goes?
Any help/thoughts/advice would be appreciated.
For a group chat application, the general approach will be:
Server side (accept process):
Create the socket, bind it to a well-known port (on the appropriate interface) and listen
While (app_running)
Client_socket = accept (using serverSocket)
Spawn a new thread and pass this socket to it. That thread handles the client that just connected.
Continue, so that the server can keep accepting more connections.
Server-side client mgmt Thread:
while app_running:
read the incoming message and store it in a queue or something similar.
continue
Server side (group chat processing):
For all connected clients:
check their queues. If any message is present, send it to ALL connected clients (including the client that sent it, which serves as a sort of ACK)
Client side:
create a socket
connect to server via IP-address, and port
do send/receive.
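A minimal threaded sketch of the above (for brevity it collapses the per-client queue into a direct broadcast; the names and the port are illustrative):

    import socket
    import threading

    clients = []                  # connected client sockets
    clients_lock = threading.Lock()

    def broadcast(message):
        # send to ALL connected clients, including the sender (acts as an ACK)
        with clients_lock:
            for c in list(clients):
                try:
                    c.sendall(message)
                except OSError:
                    clients.remove(c)

    def client_thread(conn):
        with clients_lock:
            clients.append(conn)
        try:
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                broadcast(data)
        finally:
            with clients_lock:
                if conn in clients:
                    clients.remove(conn)
            conn.close()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 12345))
    server.listen(5)
    while True:
        conn, addr = server.accept()
        threading.Thread(target=client_thread, args=(conn,), daemon=True).start()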
There can be lots of improvement on the above. For example, the server could poll the sockets or use the "select" operation on a group of sockets. That would make it more efficient, in the sense that having a separate thread for each connected client is overkill when there are many clients (think ~1 MB of stack per thread); see the sketch after the PS below.
PS: I haven't really used the asyncore module, but I am guessing you would notice a performance improvement when you have lots of connected clients and very little processing per client.
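For the "select" idea specifically, Python's selectors module gives a single-threaded variant (a sketch only; note that asyncore itself has since been deprecated):

    import selectors
    import socket

    sel = selectors.DefaultSelector()

    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("0.0.0.0", 12345))
    server.listen()
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ)

    while True:
        for key, _ in sel.select():
            sock = key.fileobj
            if sock is server:
                conn, _addr = server.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
            else:
                data = sock.recv(4096)
                if not data:
                    sel.unregister(sock)
                    sock.close()
                    continue
                # broadcast to every registered client socket (including the sender);
                # a real server would buffer writes instead of calling sendall here
                for k in list(sel.get_map().values()):
                    if k.fileobj is not server:
                        k.fileobj.sendall(data)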