Differentiating Multiple WebSockets - Python

I am using a library (ShareDB) for operational transformation, and the server and client sides use a websocket-json-stream to communicate. However, ShareDB is being run on Node.js as a service (I'm using zerorpc to control my Node processes), since my main web framework is Tornado (Python). I understand from this thread that with a stateful protocol such as TCP, connections are differentiated by the client's IP address and port (so only one server port is required). And according to this response regarding how websockets handle multiple incoming requests, there is no difference in the underlying transport channel between plain TCP and WebSockets.
So my question is: if I create a websocket from the client to the Python server, and then also from the client to my Node.js code (the ShareDB service), how can the server differentiate which socket goes with which? Is it the server's responsibility to have only a single socket 'listening' for a connection at a given time (i.e. to first establish communication with the Python server and then to start listening for the second websocket)?

The simplest way to run two server processes on the same physical server box is to have each of them listen on a different port and then the client connects to the appropriate port on that server to indicate which server it is trying to connect to.
If you can only have one incoming port due to your server environment, then you can use something like a proxy. You still have your two servers listening on different ports, but neither one is listening on the port that is open to the outside world. The proxy listens on the one incoming port that is open to the outside world and then based on some characteristics of the incoming connection, the proxy directs that incoming connection to the appropriate server process.
The proxy can be configured to identify which process you are trying to connect to via, for example, the request URL (path) or the DNS hostname (Host header). A minimal sketch of the simpler two-port setup follows below.
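As a hedged illustration of the two-port approach (not the asker's actual ShareDB/Tornado code), here is a minimal sketch assuming a recent version of the websockets package from PyPI (where handlers take a single connection argument); the handler names and port numbers are placeholders. The server never has to "differentiate" anything beyond which port the connection arrived on.

    import asyncio
    import websockets

    async def python_service(ws):
        # stands in for the Tornado-side endpoint
        async for message in ws:
            await ws.send(f"python-service echoed: {message!r}")

    async def sharedb_service(ws):
        # stands in for the Node.js/ShareDB-side endpoint
        async for message in ws:
            await ws.send(f"sharedb-service echoed: {message!r}")

    async def main():
        # two independent WebSocket servers, told apart purely by port number
        async with websockets.serve(python_service, "localhost", 8001), \
                   websockets.serve(sharedb_service, "localhost", 8002):
            await asyncio.Future()  # run forever

    asyncio.run(main())

A client that wants the ShareDB-like service simply connects to ws://localhost:8002; the operating system keeps the individual TCP connections apart by their source/destination address and port tuples.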

Related

How does established WebSocket traffic traverse firewalls?

I understand the upgrade handshake and then the creation of the WebSocket channel on a totally different socket, but I'm puzzled as to why this is not a problem when firewalls may block all traffic except that which is bound for 80 (or 443). It seems that WebSocket traffic hosted on its own (non-80, non-443) port would get blocked -- but clearly WebSockets are mature and effective, so I must be missing something. How does the WebSocket traffic on the non-80/non-443 port traverse firewalls? Is this related (no pun intended) to the ESTABLISHED/RELATED firewall rules? The following seems to come close to a solid answer:
https://stackoverflow.com/a/2291861/6100445
...that is, the WebSocket traffic is established in an outbound sense from the browser over HTTP and then established in an outbound sense from the WebSocket server over the new WebSocket port?
WebSockets are "designed to work over HTTP ports 80 and 443":
https://datatracker.ietf.org/doc/html/rfc6455
(also https://en.wikipedia.org/wiki/WebSocket)
But many tutorials launch separate WebSocket servers on totally different port numbers (e.g. 8001 is used commonly). A good example is the websockets package on PyPI, which uses port 8001 and flatly states WebSockets and HTTP servers should run on separate ports:
https://websockets.readthedocs.io/en/stable/intro/tutorial1.html
https://websockets.readthedocs.io/en/stable/faq/server.html#how-do-i-run-http-and-websocket-servers-on-the-same-port
A lot of material on the Web is hand-wavy and glosses over the details with statements like WebSockets "use the same port as HTTP and therefore get through firewalls" (I assume they mean the upgrade/handshake portion), while many other sources indicate that a WebSocket server (on the same computer as the HTTP server) should be established on a different port (I assume because the HTTP server is already bound to 80 or 443, and this different non-80/non-443 port therefore carries the upgraded WebSocket traffic). The separate ports make sense from a TCP/IP socket-binding perspective. What am I missing about how WebSockets use 80 (or 443) for the upgrade/handshake and a separate port for the established WebSocket traffic, yet still work through firewalls where the only traffic allowed is that which is destined for port 80 (or 443)?
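For what it's worth, a short sketch of the handshake may help frame the question: the upgrade and all subsequent WebSocket frames travel over one and the same TCP connection to port 80 (or 443); no second port is opened. The host below is only a placeholder, and only standard-library calls are used.

    import base64
    import os
    import socket

    # open ONE TCP connection to port 80; everything below happens on this socket
    sock = socket.create_connection(("example.com", 80))   # placeholder host

    key = base64.b64encode(os.urandom(16)).decode()
    request = (
        "GET /chat HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        "Sec-WebSocket-Key: " + key + "\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    )
    sock.sendall(request.encode())

    # if a WebSocket endpoint actually lives at that URL, the reply starts with
    # "HTTP/1.1 101 Switching Protocols"; from then on, WebSocket frames are
    # exchanged over this very same socket, so the firewall only ever sees port 80
    print(sock.recv(4096).decode(errors="replace"))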

Is it possible to tunnel between 2 proxy servers through WebSocket

A proxy server forwards HTTP traffic between a client and a destination host.
In effect, the proxy server has two jobs: (A) receive data from the client, and (B) send that data to the server (and vice versa for the response).
Now, what if we separate these two tasks into 2 different proxy servers and connect those 2 servers using another protocol such as WebSocket?
Why do I want to do this? My initial intention is to bypass internet censorship in regions where most of the internet is blocked and only some protocols and servers (including Cloudflare) are reachable. Doing this, we can put a reverse proxy in front of our client so that our proxy server B remains anonymous.
WebSocket is used here because Cloudflare only allows standard HTTP and WebSocket, not HTTP(S) proxying. And if WebSocket were blocked (which sounds unlikely), we could use another intermediate protocol such as SSH, FTP, or HTTP. What are your thoughts about this? Is it possible? Is there such a proxy server out there? Or is there a better way?
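The client-side half ("proxy A") of such a setup can be sketched roughly as below: accept plain TCP locally and relay the raw bytes over a WebSocket to a remote "proxy B", which would unwrap the frames and forward them to the real destination. This is only a sketch under those assumptions; the URL is a placeholder, it opens one WebSocket per tunnelled connection, and it assumes the websockets package is available.

    import asyncio
    import websockets

    PROXY_B_URL = "wss://proxy-b.example.com/tunnel"   # hypothetical endpoint behind Cloudflare

    async def handle_local_client(reader, writer):
        async with websockets.connect(PROXY_B_URL) as ws:

            async def tcp_to_ws():
                while data := await reader.read(4096):
                    await ws.send(data)        # raw bytes become binary frames

            async def ws_to_tcp():
                async for frame in ws:
                    writer.write(frame if isinstance(frame, bytes) else frame.encode())
                    await writer.drain()

            # pump both directions; shutdown/error handling is omitted for brevity
            await asyncio.gather(tcp_to_ws(), ws_to_tcp())

    async def main():
        # local entry point the client application would connect to
        server = await asyncio.start_server(handle_local_client, "127.0.0.1", 1080)
        async with server:
            await server.serve_forever()

    asyncio.run(main())

Going through Cloudflare would additionally require proxy B to sit behind a hostname that Cloudflare proxies with WebSocket support enabled.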

Do UDP and TCP always use the same IP for one client?

I've made a server (Python, Twisted) for my online game. I started with TCP, then later added constant updates with UDP (and saw a big speed improvement). But now I need to match each client's UDP traffic with its TCP connection.
I'm doing this by having each client first connect to the TCP server, and getting a unique ID. Then the client sends this ID to the UDP server, connecting it also. I then have a main list of TCP clients (ordered by the unique ID).
My goal is to be able to send messages to the same client over both TCP and UDP.
What is the best way to link a UDP and TCP socket to the same client?
Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)?
Finally, if anyone with knowledge of TCP/UDP could tell me (I'm new!): will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? (I need to know this to secure my server, but I don't want to accidentally block some fair users.)
Answering your last question: no. Because:
If the client is behind NAT and the gateway (doing the NAT) has more than one IP, every connection can appear to you as a connection from a different IP.
Another problem: when several different clients behind the same NAT connect to your server, you will have more than one TCP-UDP client pair coming from the same IP, and it will be impossible to match the correct pairs by address alone.
Your ID-based method seems to be a good solution to the problem; a minimal sketch of it follows below.
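Here is a rough sketch of that pairing scheme, assuming a token issued over TCP and echoed in the client's first UDP datagram; the callback names and handle_game_update are placeholders for however the Twisted (or other) server actually dispatches events.

    import secrets

    # token -> {"tcp": <TCP connection object>, "udp_addr": (ip, port) or None}
    clients = {}

    def on_tcp_connect(tcp_conn):
        # hand the freshly connected TCP client a random token
        token = secrets.token_hex(8)                 # 16 hex characters
        clients[token] = {"tcp": tcp_conn, "udp_addr": None}
        tcp_conn.send(token.encode())

    def on_udp_datagram(data, addr):
        # the client's first UDP packet starts with the token it received over TCP
        token = data[:16].decode(errors="ignore")
        entry = clients.get(token)
        if entry is None:
            return                                   # unknown or spoofed token: ignore
        if entry["udp_addr"] is None:
            entry["udp_addr"] = addr                 # bind this UDP source to the TCP session
        handle_game_update(entry, data[16:])         # placeholder for the game logic

    def send_fast_update(udp_sock, token, payload):
        # low-latency, unreliable path back to the paired client
        addr = clients[token]["udp_addr"]
        if addr is not None:
            udp_sock.sendto(payload, addr)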
1- Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? NO in the general case, but ...
2- Is it necessary for the client to connect twice, once for TCP and once for UDP? NO, definitely not.
3- Will the same client have the same IP address when connecting over UDP vs TCP (from the same machine)? YES, except in special cases.
You really need some basic knowledge of the TCP, UDP and IP protocols to go further, and ideally of the OSI model.
Basics (but you should read the Wikipedia articles for a deeper understanding):
TCP and UDP are two protocols layered over IP
IP is a routable protocol: it can pass through routers
TCP is a connection-oriented protocol: it can pass through gateways and proxies (firewalls and NATs)
UDP is a connectionless protocol: it generally cannot pass through such gateways
a single machine may have more than one network interface (hardware slot): each will have a different IP address
a single interface may have more than one IP address
in the general case, client machines have only one network interface and one IP address; in any case, you can require that a client present the same address over TCP and UDP when connecting to your server
Network Address Translation (NAT) is when a gateway between a local network and the wild internet always presents its own IP address and keeps track of TCP connections in order to send packets back to the correct client
In fact the most serious problem is when there is a gateway between the client and your server. As long as the client and the server are two (virtual) machines to which you have direct access, there is no problem, but corporate networks are generally protected by a firewall acting as a NAT, and many domestic ADSL routers also include a firewall and a NAT. In that case, just forget UDP. It is possible to instruct a domestic router to pass all UDP traffic to a single local IP, but it is not necessarily an easy job. In addition, it means that if one of your users has more than one machine at home, they will be able to use only one at a time and will have to reconfigure the router to switch to another one!
First of all, when you send data with TCP or UDP you have to specify a port.
If your client connects with TCP and your server then sends a response with UDP, the packet will be rejected by the client.
Why? Because a port has to be opened for the connection, and you cannot be sure that port is actually open on the client side.
So when the client initiates a TCP connection, it opens a port to send data and receive the response. You have to do the same with UDP: when the client initiates all communication with the server, you can be sure all the necessary ports are open.
Don't forget to send data back on the port from which the connection was opened.
Can I just take the IP address of a new TCP client, and send them data over UDP to that IP? Or is it necessary for the client to connect twice, once for TCP and once for UDP (by sending a 'connect' message)?
Why don't you want to create 2 connections?
You should use UDP for movement, for example: in an FPS you might send the player's position every 50 ms, so using UDP really matters there.
It's not just a question of a better connection. If you want a really good connection between client and server you need asynchronous I/O and a STREAM socket. But a TCP stream does not signal message boundaries, even though it gives better transmission, so you have to mark the end of each message yourself (for example with <EOF>).
But this has a cost: for every chunk you receive you have to scan the data and split it on <EOF>, which can use a lot of CPU. A minimal sketch of this framing is shown below.
With UDP each packet has a natural end (the datagram boundary), but you need to implement your own validity checks.
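As a rough illustration of the <EOF> framing described above (the delimiter and class name are arbitrary choices, not from any library):

    DELIM = b"<EOF>"

    class StreamFramer:
        """Accumulate TCP bytes and split out complete <EOF>-terminated messages."""

        def __init__(self):
            self.buffer = b""

        def feed(self, data):
            self.buffer += data
            while DELIM in self.buffer:
                message, self.buffer = self.buffer.split(DELIM, 1)
                yield message

    # data arrives in arbitrary chunks; the framer reassembles whole messages
    framer = StreamFramer()
    for chunk in (b"hel", b"lo<EOF>wor", b"ld<EOF>"):
        for msg in framer.feed(chunk):
            print(msg)   # b'hello', then b'world'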

Python PPTP library (or another way to reroute traffic through a process)

I have a process that listens for traffic on a certain port and performs some manipulation before sending it on its way to a specific server. I would like to redirect all traffic through this process. Since I do not know of a way to force traffic to a port, I was thinking I could establish a PPTP server on localhost, listen for traffic, then send it on its way. The user would have to create a VPN with the destination being localhost:P1. The flow would be as follows:
Traffic destined for the default route is routed out the ppp tunnel interface (endpoint is localhost:P1)
Process listening on localhost:P1 gets a packet.
Process listening on localhost:P1 uses a previously established socket with server1 (listening on P2) to send data.
Process listening on localhost sends data over that socket to server1:P2.
Response flows in reverse
I could accomplish this using a PPTP library for Python, if anyone knows of one. Is there a better way to accomplish this?
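Independent of how the traffic gets steered into P1 (PPTP tunnel or otherwise), steps 2-4 of the flow above amount to a relay process; a minimal asyncio sketch is below, with placeholder host and ports.

    import asyncio

    LISTEN_PORT = 1080                          # stands in for "P1"
    UPSTREAM = ("server1.example.com", 2080)    # stands in for "server1:P2"

    async def pump(reader, writer):
        while data := await reader.read(4096):
            # ...perform the manipulation mentioned above on `data` here...
            writer.write(data)
            await writer.drain()
        writer.close()

    async def handle(client_reader, client_writer):
        upstream_reader, upstream_writer = await asyncio.open_connection(*UPSTREAM)
        # relay both directions; the response flows back the same way
        await asyncio.gather(pump(client_reader, upstream_writer),
                             pump(upstream_reader, client_writer))

    async def main():
        server = await asyncio.start_server(handle, "127.0.0.1", LISTEN_PORT)
        async with server:
            await server.serve_forever()

    asyncio.run(main())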

Group chat application in Python using threads or asyncore

I am developing a group chat application to learn how to use sockets, threads (maybe), and the asyncore module (maybe).
My thought was to have a client-server architecture, so that when a client connects, the server sends it a list of the other connected clients (their user names and IP addresses); a person can then connect to one or more people at a time and the server sets up a P2P connection between the clients. I have the socket part working, but the server can only handle one client connection at a time.
What would be the best, most common, practical way to go about handling multiple connections?
Do I create a new process/thread whenever a new connection comes into the server and then wire the different client connections together, or do I use the asyncore module, which as I understand it lets the server send the same data to multiple sockets (connections) while I just regulate where the data goes?
Any help/thoughts/advice would be appreciated.
For a group chat application, the general approach will be as outlined below (a runnable sketch is given after these steps):
Server side (accept process):
    Create the socket, bind it to a well-known port (on the appropriate interface) and listen
    While (app_running):
        client_socket = accept (using the server socket)
        Spawn a new thread and pass this socket to it; that thread handles the client that just connected
        Continue, so that the server can keep accepting more connections
Server-side client-management thread:
    while app_running:
        read the incoming message and store it in a queue (or similar)
        continue
Server side (group chat processing):
    For all connected clients:
        check their queues; if any message is present, send it to ALL the connected clients (including the client that sent it -- this serves as a sort of ACK)
Client side:
    create a socket
    connect to the server via IP address and port
    do send/receive
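Here is a minimal, hedged sketch of the thread-per-client outline above (port number and buffer sizes are arbitrary; the per-client queue and group-chat routing are collapsed into a direct broadcast to keep it short):

    import socket
    import threading

    clients = []                       # sockets of all connected clients
    clients_lock = threading.Lock()

    def broadcast(message):
        # send to ALL connected clients, including the sender (acts as a rough ACK)
        with clients_lock:
            for c in clients:
                try:
                    c.sendall(message)
                except OSError:
                    pass               # dead sockets get cleaned up by their own handler

    def handle_client(conn):
        try:
            while data := conn.recv(4096):
                broadcast(data)
        finally:
            with clients_lock:
                if conn in clients:
                    clients.remove(conn)
            conn.close()

    def main():
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", 5000))          # arbitrary well-known port
        server.listen()
        while True:
            conn, _addr = server.accept()
            with clients_lock:
                clients.append(conn)
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

    main()

A client is then just socket.create_connection(("server-host", 5000)) followed by send/recv calls.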
There can be lots of improvements on the above. For example, the server could poll the sockets or use the "select" operation on a group of sockets. That would be more efficient, since a separate thread for each connected client becomes overkill when there are many (think roughly 1 MB of stack per thread).
PS: I haven't really used the asyncore module, but my guess is that you would notice a performance improvement when you have lots of connected clients and very little processing per client.
