I have found code for peer-to-peer chat, but the thorny problems are:
1- Is there an automatic way for the server and client to get each other's IP (often a dynamic IP)?
N.B.: I read these questions, which use an intermediate server, but I can't design my own server, so I am searching for another idea:
Creating Peer to Peer connections using intermediate server
Writing a simple P2P chat application
2- Can hackers use the port opened by the chat program?
One way to locate another peer on a local area network is to broadcast a specially constructed packet to the whole IPv4 subnet using the broadcast address. The peer client can then be written to respond to the host that broadcast the message and make a connection. A good example of an application that uses this method is Dropbox. Dropbox uses what it calls LAN sync, which allows files to be transferred peer to peer if the file is present in a Dropbox folder on a host within the LAN. If you fire up Wireshark, you can see the LAN sync messages being broadcast to the broadcast address.
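A minimal sketch of that kind of LAN discovery in Python (the port number and probe payload below are made up for illustration):

import socket

DISCOVERY_PORT = 50000          # arbitrary port chosen for this sketch
PROBE = b"p2p-chat-discovery"   # made-up probe payload

# peer A: broadcast a probe to the subnet and collect replies
def find_peers(timeout=2.0):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    s.settimeout(timeout)
    s.sendto(PROBE, ("255.255.255.255", DISCOVERY_PORT))
    peers = []
    try:
        while True:
            data, addr = s.recvfrom(1024)
            if data == b"here":
                peers.append(addr)       # (ip, port) of a responding peer
    except socket.timeout:
        pass
    return peers

# peer B: listen for probes and answer them, revealing our address
def answer_probes():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("", DISCOVERY_PORT))
    while True:
        data, addr = s.recvfrom(1024)
        if data == PROBE:
            s.sendto(b"here", addr)      # the prober learns our IP from addr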
Hackers can use any remote communication protocol to exploit an application if there is a vulnerability present. The best way to avoid this is to use secure coding practices and end-to-end encryption. It's not the ports being open or closed that you need to worry about so much as the code sitting at the application layer.
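As a starting point you can at least encrypt the transport with Python's built-in ssl module. This is TLS between the two endpoints rather than true end-to-end encryption, and the sketch below assumes you have generated a self-signed certificate/key pair with hypothetical names cert.pem/key.pem and shared cert.pem with the peer (the port and peer address are placeholders too):

import socket, ssl

# listening side: present the certificate to anyone who connects
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain("cert.pem", "key.pem")
listener = server_ctx.wrap_socket(socket.create_server(("", 6000)), server_side=True)
conn, addr = listener.accept()           # conn is now an encrypted socket

# connecting side: trust only that same certificate
client_ctx = ssl.create_default_context(cafile="cert.pem")
client_ctx.check_hostname = False        # self-signed cert, no real hostname to check
raw = socket.create_connection(("198.51.100.7", 6000))   # placeholder peer address
tls = client_ctx.wrap_socket(raw)
tls.sendall(b"hello, encrypted chat")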
Related
I am trying to create a chess game between two different computers that are not on the same LAN. I am having trouble connecting the two via a TCP connection (UDP would probably be sufficient as well, as long as the packets arrive, but ideally TCP).
I am new to a lot of networking and am unaware of many tools that may be useful. I am also at university and therefore don't have control over the router to update firewall rules. What can I do to work around the router firewall to connect the two devices?
I am primarily using the Python socket library at the moment to implement the connection.
Any information about how I can send messages between the two computers outside of a LAN would be very useful. Thank you for your help!
I have ensured that the client side is using the public IP of the server and that the server is using "" for its socket host. I also checked that the connection works without issue over a LAN. I included a batch file that enables the specific port used for the game at the beginning of runtime and disables it at the end of the program; if I am not mistaken, that only affects the computer's firewall rules, not the router's. I have looked into receiving the packets on port 80 and redirecting them to my specific program, but was unsuccessful in finding a solution of that type.
If the server is behind a router/firewall you'll have to use some sort of hole punching method to create the connection. STUN is one of the most common, though I've never actually used it in a Python program so I don't know what Python implementations are out there.
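As an illustration only: third-party packages such as pystun3 (imported as stun) reportedly perform the STUN binding request for you. I haven't verified this API myself, so treat the call below as an assumption rather than a recipe:

# pip install pystun3   (third-party package; API assumed, not verified)
import stun

nat_type, external_ip, external_port = stun.get_ip_info()
print("NAT type:", nat_type)
print("public address seen by the STUN server:", external_ip, external_port)
# The two peers would exchange these public (ip, port) pairs out of band and
# then attempt a UDP hole punch by sending datagrams to each other's pair.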
I have an embedded system that can connect to the Internet. This embedded system must send sensor data to a PC client.
I put a socket client, written in Python, on my PC. I put a socket server on the embedded system, written in C++ (the only language you can use on it).
I can successfully connect from my PC to the embedded system using the sockets and send and receive whatever I want.
Now, the problem is that I use a local IP to connect to the system, and both machines must be connected to the same Wi-Fi router.
In the real application, I won't know where in the world the embedded system is. I need to reach it through the Internet, because it will be connected to the Internet through 4G.
My question is: how can I connect to it through the Internet if the embedded system is connected to the Internet using 4G?
Thank you
Realistically, in typical situations, neither a PC nor an embedded device hanging off a 4G modem is likely to have (or should be allowed to have) an externally routable address.
What this means in practice is that you need to bounce your traffic through a mutually visible relay in the cloud.
One very common way of doing that for IoT devices (which is basically to say, connected embedded devices) is to use MQTT. You'll find support for it in one form or another on most computing platforms with any sort of IP networking capability.
Of course there are many other schemes, too - you can do something with a RESTful API, or websockets (perhaps as an alternate mode of an MQTT broker), or various proprietary IoT solutions offered by the big cloud platforms.
It's also going to be really important to wrap the traffic in SSL, so you'll need support for that on your embedded device too. And you'll have to think about which CA certificates you package, and what you do about time, given that a valid clock is formally required as an input to SSL certificate validation.
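For example, the PC side could simply subscribe to a topic on a broker that both machines can reach. A minimal sketch using the third-party paho-mqtt package (the broker host and topic name are placeholders, and this uses the older 1.x callback style; paho-mqtt 2.x also wants a CallbackAPIVersion argument to Client()):

import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print("sensor data:", msg.topic, msg.payload)

client = mqtt.Client()
client.on_message = on_message
client.tls_set()                              # wrap the connection in TLS
client.connect("broker.example.com", 8883)    # 8883 is the conventional MQTT-over-TLS port
client.subscribe("devices/1234/sensors")      # the device publishes here over its 4G link
client.loop_forever()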
I think your problem is more easily solved if you reverse the roles of your embedded system and PC. If you are communicating to a device using IP protocols across cellular networks, it is much easier to have the device connect to a server on the PC rather than the other way around. Some networks/cellular modems do not allow server sockets and in any case, the IP address is usually dynamically allocated and therefore difficult to know. By having the device connect to a server, it "knows" the domain name (or IP address) and port to which it should make the connection. You just have to make sure that there is indeed a server program running at that host bound to some agreed upon port number. You can wake up the device to form the connection based on a number of criteria, e.g. time or amount of collected data, etc.
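In that arrangement the PC just runs an ordinary listening socket and the device dials out to it whenever it has data. A minimal sketch of the PC side (the port is arbitrary, and the PC still needs a publicly reachable address or a forwarded port):

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("", 5000))             # port number agreed upon with the device
srv.listen(1)

while True:
    conn, addr = srv.accept()    # the embedded device connects over its 4G link
    with conn:
        data = conn.recv(4096)
        print("sensor data from", addr, ":", data)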
I am creating a collaborative note-making app in Python.
Here, one person running the app on a computer can create the server; subsequently, the changes on the screen ([color, pixel], where pixel = [x, y]) will be transmitted to the others connected to the server.
I am using Kivy to create the app. My question is about transmitting the data over the server.
I can create the server using this:
import socket
import subprocess

ip_address = socket.gethostbyname(socket.gethostname())
# launch the Django development server bound to this machine's LAN address
subprocess.call(["python", "manage.py", "runserver", ip_address + ":8000"])
Now, how do others connect to the server and request the data (assuming the above code is correct)? Also, how do I send the data in Django?
Well, Django is a framework for creating a site or API that is reachable through the HTTP protocol. This has several consequences for you:
The server cannot send a message to a client unless the client asks. HTTP is a request-response protocol: the client sends a request (for example, http://server.com/getUpdates?id=100500) and gets a response from the server.
Creating clients that constantly ask the server for updates is bad practice and will probably lead to you DoS-ing your own server.
Although you can use WebSockets, using Django for such a task is really overkill.
Summarizing: you need a reliable duplex channel for sending data in both directions. I'd start with a TCP server rather than HTTP. Fortunately, the Python stdlib has a module you can start with: socketserver.
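A minimal sketch of such a server built on socketserver (the port is arbitrary; every line received from one client is pushed to all the others, which is roughly what the note-sharing scenario needs):

import socketserver
import threading

clients = set()                # wfile objects of connected clients
lock = threading.Lock()

class NoteHandler(socketserver.StreamRequestHandler):
    def handle(self):
        with lock:
            clients.add(self.wfile)
        try:
            for line in self.rfile:          # e.g. b"color,x,y\n" updates
                with lock:
                    for w in list(clients):
                        if w is not self.wfile:
                            try:
                                w.write(line)    # forward the update
                                w.flush()
                            except OSError:
                                clients.discard(w)
        finally:
            with lock:
                clients.discard(self.wfile)

server = socketserver.ThreadingTCPServer(("", 8000), NoteHandler)
server.serve_forever()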
Additional reading
TCP
UDP (you will probably want this for broadcasting)
Berkeley sockets (the socket standard underlying the socketserver module)
TCP vs. UDP
When deciding which protocol to use, the following aspects should be considered:
TCP is reliable. Messages never disappear silently. If there was a network error, the message will be resent. If there's no connection, an explicit error will be raised. TCP uses several algorithms to fit into the network channel. It is an intelligent protocol.
UDP is unreliable. It has none of the features TCP has. Packets can disappear or arrive reordered. But UDP messages are lightweight, and in experienced hands they power systems such as networked action games and streaming video (lost and reordered messages aren't crucial there, and TCP becomes too slow).
So I'd recommend starting with TCP. It's much easier to get working quickly and correctly than UDP. Switch to UDP once you have some experience with TCP and there are a lot of people using your app who want the lowest latency possible.
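The difference is visible even in the smallest client code. A tiny illustration, assuming something is listening on localhost ports 9000 (TCP) and 9001 (UDP); both port numbers are made up:

import socket

# TCP: a connection must be established first, and failures are explicit
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", 9000))     # raises ConnectionRefusedError if nobody is listening
tcp.sendall(b"hello over tcp\n")
tcp.close()

# UDP: fire and forget; the datagram may silently never arrive
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello over udp\n", ("127.0.0.1", 9001))
udp.close()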
I want to build a peer-to-peer chat engine that runs over the Internet. So far my code works on a local network but not beyond it. This is because listening on a socket with Python's socket module does not, by itself, make it reachable from outside the LAN.
It is acceptable for IPs to be shared knowledge, i.e. it is fine for the other person to need to know my IP address (and a port on which I am listening) to connect to me.
How does one tell the router to open a socket to the outside world? Presumably this can be done, as P2P software such as BitTorrent must do it for communication between clients.
As you have mentioned, you have to open a specific port on the router and use that port for communication. As there are many router manufacturers, each with a variety of models, I suggest you check the manual for the router you want to use.
As for the code, first check that it works on the LAN, and then see whether the router lets you white-list some ports. You can find many simple examples online.
Here is some code I played with some time ago:
http://www.mediafire.com/download/vef4q4prkr7be2e/python.socket.zip
If you don't want users to mess with ports, router settings and such, the first alternative I can think of is this:
You set up a REST API; through one interface a client can retrieve the messages by providing (chatRoomName, FromTimestamp, ToTimestamp[, optionally chatRoomPassWord]). But this has nothing to do with sockets; you would use simple HTTP requests (urllib/urllib2). Of course there might be workarounds, such as an always-white-listed port (like 80 for browsers or 22 for SSH), but you have to search for such exceptions.
Note that ports below 1024 require special privileges (admin/sudo) to be used.
P.S. In the traditional implementation, the other party (the client) has to know your (IP, port) pair to be able to connect to you (the server).
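In other words, once the chosen port is forwarded on the server's router, the traditional setup looks roughly like this (the public IP and port below are placeholders):

import socket

# server side (the person whose router forwards the chosen port)
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("", 6000))                  # listen on all interfaces on the forwarded port
srv.listen(1)
conn, addr = srv.accept()
print("peer connected from", addr)

# client side (the other person, anywhere on the Internet)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("203.0.113.5", 6000))    # the server's public IP and forwarded port
cli.sendall(b"hi!")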
Important note:
I've already asked this question on Server Fault: https://serverfault.com/questions/349065/clustering-tcp-servers-so-can-send-data-to-all-clients, but I'd also like a programmer's perspective on the problem.
I'm developing a real-time mobile app by setting up a TCP connection between the app and server backend. Each user can send messages to all other users.
(I'm making the TCP server in Python with Twisted, am creating my own 'protocol' for communication between the app/backend and hosting it on Amazon Web Services.)
Currently I'm trying to make the backend scalable (and reliable). As far as I can tell, the system could cope with more users either by upgrading to a bigger server (which could become rather limiting) or by adding new servers in a cluster configuration, i.e. having several servers sitting behind a load balancer, probably with one database they all access.
I have sketched out the rough architecture of this:
However, what if the Red user sends a message to all other connected users? Red's server has a TCP connection with Red, but not with Green.
I can think of one way to deal with this problem:
Each server could have an open TCP (or SSL) connection with every other server. When one server wants to send a message to all users, it simply passes it along its connections to the other servers. A record could be kept in the database of which servers are online (and their IP addresses), and one of the servers could act as a boss, i.e. decide whether the others are up and running and, if not, remove them from the database. (If a server was up but lost its connection to the boss, it could check the database to see whether it had been removed, and restart if it had; otherwise it could assume the boss was down.)
Clearly this needs refinement but shows the general principle.
Alternatively, and I'm not sure this is even possible (it definitely feels like wishful thinking on my part):
Perhaps users could just connect to a box or router, and all servers could message all users through it?
If you know how to cluster TCP servers effectively, or a design pattern that provides a solution, or have any comments at all, then I would be very grateful. Thank you :-)
You need to decide (or, if you already have, share these decisions with us) on the reliability requirements for your system: should all messages be delivered to all users in every case (e.g. when one or more servers crash)? Can you tolerate sending the same message twice to the same user after a server crash? Your system's complexity depends directly on these decisions.
The simplest version is one where a message is not delivered to all users if a server crashes. All your servers keep TCP connections to each other. One of them receives a message from a user and sends it to all other users connected to it and to all the other connected servers; those servers then send the message to all of their users. To scale the system, you just run an additional server that connects to all existing servers.
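The question mentions Twisted, but just to show the shape of that fan-out, here is a compact sketch using plain asyncio instead; the ports and the peer list are placeholders:

import asyncio

CLIENT_PORT = 9000                      # where users connect
PEER_PORT = 9100                        # where the other servers connect
PEER_SERVERS = [("10.0.0.2", 9100)]     # hypothetical addresses of the other servers

clients = set()     # stream writers of locally connected users
peers = set()       # stream writers of server-to-server links

async def broadcast(writers, line):
    for w in list(writers):
        try:
            w.write(line)
            await w.drain()
        except ConnectionError:
            writers.discard(w)

async def handle_client(reader, writer):
    clients.add(writer)
    try:
        async for line in reader:
            # forward to local users (including the sender, for simplicity)
            await broadcast(clients, line)
            # and to every peer server, which relays it to its own users
            await broadcast(peers, line)
    finally:
        clients.discard(writer)

async def handle_peer(reader, writer):
    peers.add(writer)
    try:
        async for line in reader:
            # came from another server: deliver only to local users,
            # otherwise it would bounce between servers forever
            await broadcast(clients, line)
    finally:
        peers.discard(writer)

async def main():
    for host, port in PEER_SERVERS:
        try:
            reader, writer = await asyncio.open_connection(host, port)
            asyncio.create_task(handle_peer(reader, writer))
        except OSError:
            pass                        # peer not up yet; real code would retry
    await asyncio.start_server(handle_client, "0.0.0.0", CLIENT_PORT)
    await asyncio.start_server(handle_peer, "0.0.0.0", PEER_PORT)
    await asyncio.Event().wait()        # run forever

asyncio.run(main())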
Have a look at how this is handled by IRC servers. They can essentially do this already: everybody can send to everybody else, on all servers, or just to single users (also on another server), or to groups, called "channels". It works best by routing messages among the servers.
It's not that hard, if you can make sure the servers know each other and can talk to each other.
On a side note: on 9/11, the most reliable Internet news source was the IRC network. All the WWW sites were down because of bandwidth; it took them ages to even get a plain-text web page back up. During this time, IRC networks were able to provide near-real-time, moderated news channels across the Atlantic. You might not have been able to log into a server on the other side any more, but at least the servers were able to keep up their server-to-server connections.
An obvious choice is to use the DB as a clearinghouse for messages. You have to store incoming messages somewhere anyway, lest they be lost if a server suddenly crashes. Put incoming messages into the central database and have notification processes on the TCP servers grab the messages and send them to the correct users.
A TCP server cannot be clustered; the sketch you posted here is a classic HTTP server example.
Since the device opens a TCP connection to the server (say, a pure socket), there is no way to put a load-balancing server in front of it.