Is the GAE Channel API secure? And what is the underlying implementation?

I'm using GAE + Python to create an application that needs to send real-time updates of sensitive data to clients and I wanted to know if the App Engine Channel API is secure or not. Will using HTTPS be enough or do channels require their own security protocol?
Also, what is the underlying implementation of the App Engine Channel API? WebSockets? SSE? It seems like it really only provides one-way communication from server to client through the channel, and then has the client use a standard HTTP request to communicate with the server.

Connections to the channel API are made over HTTPS, regardless of how your page was loaded, so it's not possible to eavesdrop on the contents of a channel API connection. As long as you keep the channel key secret, then, your channel is a secure communications channel between your app and the client.
Channels are implemented using long polling (comet).
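For intuition, here is a rough sketch of what long polling looks like from the client's point of view. The real Channel API client is the JavaScript library that App Engine serves, which handles this for you; the /poll URL and token parameter below are purely illustrative:

import requests

def poll_forever(url, token):
    while True:
        # The server holds the request open until a message is available
        # (or a timeout expires), then responds; the client immediately
        # reconnects, which is the essence of long polling.
        resp = requests.get(url, params={"token": token}, timeout=70)
        if resp.ok and resp.text:
            handle_message(resp.text)

def handle_message(message):
    print("got update:", message)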

Because channels are long-lived connections between the server and a client, they often cannot afford resource-intensive security mechanisms, for performance reasons. As the official documentation states, the server only receives update messages from clients via HTTP requests. And as far as I know, even Dropbox sends its long-lived notifications over plain HTTP, using a very short notification only to signal whether there is something new.
Fortunately, there are two ways to keep things secure.
First, only notify your client over the channel when some state changes; after that, let the client decide whether to make a follow-up request, which can be a secure (HTTPS) call. This is the most common way channels are used (a sketch follows below).
Second, although this is not the approach I would personally recommend, you can encrypt the data yourself and send the encrypted payload over the insecure channel.
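Here is a minimal sketch of the first option on App Engine (Python): push only a tiny "something changed" ping over the channel and serve the sensitive payload from a normal HTTPS handler. The get_sensitive_data helper and the client-id scheme are hypothetical:

import json
import webapp2
from google.appengine.api import channel

def notify_clients(client_ids):
    # Send only a lightweight ping over the channel; no sensitive data here.
    for client_id in client_ids:
        channel.send_message(client_id, json.dumps({"event": "updated"}))

class DataHandler(webapp2.RequestHandler):
    def get(self):
        # The client reacts to the ping by fetching the real payload over a
        # normal HTTPS request, where your usual authentication applies.
        self.response.headers["Content-Type"] = "application/json"
        self.response.write(json.dumps(get_sensitive_data(self.request)))  # hypothetical helper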

Related

Is there a ready-made function for http_wait in telethon?

I need to use http_wait with Telethon; are there ready-made functions in the library to use that specific method?
I need to receive messages as soon as they occur in large broadcast channels; right now the updates arrive 5-20 seconds late.
Clients using the Telegram API, such as Telethon, connect to the Telegram servers directly via a TCP socket. While connected, Telegram decides when and where to deliver the updates. Telegram's API doesn't really offer a way to "poll" for these updates.
If Telegram is delivering them slowly, it's probably to reduce load, or because the channel is too large, or because the client is not being actively used. In essence, it's not something the library can "fix".
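For completeness, the usual way to receive updates as they arrive is an event handler; there is no http_wait-style parameter to tune. The api_id, api_hash and channel name below are placeholders:

from telethon import TelegramClient, events

client = TelegramClient("session", api_id=12345, api_hash="0123456789abcdef")

# Fires whenever Telegram delivers a new message from the given channel.
@client.on(events.NewMessage(chats="some_broadcast_channel"))
async def handler(event):
    print(event.message.date, event.raw_text)

client.start()
client.run_until_disconnected()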

Private messaging in tornado

I am a beginner with Tornado (a Python-based web server). I have to create an application that will have public chat rooms and private messaging between two users, so I have been looking for a good Tornado tutorial to implement this. What I found only shows how to create WebSockets: once connected to the socket, you can send messages to the server, and you can open multiple browser tabs to simulate multiple users. All users can send messages to the server, and every other user can see all those messages. But I need private message chat between two users, like WhatsApp. Can I do the same with Tornado? Please help me out; any help would be appreciated.
If you can open sockets from the client to the server, then yes!
Sockets are just data streams. You will have to add chat-room request data and authentication to the sockets so the server can direct each client to the appropriate chat 'room' (or drop the connection if authentication fails).
After that, it's the same as what you have implemented already.
For secure chat, you'll need some form of encryption on top of all this - at least so that clients know they are talking to the correct server. From there it's adding encryption for clients to know they are talking to the right clients.
The final step would be to implement peer to peer capabilities after authenticating at the server.
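Here is a minimal sketch of that routing idea in Tornado. The user id is taken from the URL and messages are assumed to be JSON with a "to" field; a real application would authenticate the user rather than trusting the URL:

import json
import tornado.ioloop
import tornado.web
import tornado.websocket

clients = {}  # user_id -> open WebSocket handler

class ChatHandler(tornado.websocket.WebSocketHandler):
    def open(self, user_id):
        self.user_id = user_id
        clients[user_id] = self

    def on_message(self, message):
        # Expect {"to": "<user_id>", "text": "..."} and relay it only to
        # the intended recipient, if that user is currently connected.
        data = json.loads(message)
        target = clients.get(data.get("to"))
        if target is not None:
            target.write_message(json.dumps({"from": self.user_id,
                                             "text": data.get("text")}))

    def on_close(self):
        clients.pop(self.user_id, None)

app = tornado.web.Application([(r"/ws/(\w+)", ChatHandler)])
app.listen(8888)  # arbitrary port
tornado.ioloop.IOLoop.current().start()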

Server to Server Websocket communication

Here is the architecture topology:
An IoT device that counts people and saves the data to its cloud platform. The data can be accessed via an API; more specifically, it requires a webserver endpoint to which it can push the data every minute or so. This is a ready-made product, so I cannot change the data transfer method.
A webserver on my side that receives and stores the data.
As I am new to WebSockets, I interpret the above configuration as a WebSocket server installed on my webserver, waiting for data to arrive from the IoT server (the client).
So I deployed a Linux server on DigitalOcean and started a WebSocket server to wait for incoming connections. The code I used for the server is:
import asyncio
import websockets

async def echo(websocket, path):
    # Print every message received on this connection
    async for message in websocket:
        print(message)

start_server = websockets.serve(echo, "MYSERVERIP", 80)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()
All I need at this stage is to print all JSON packets that are pushed from the IoT server.
When I try to set the endpoint address in the IoT server, it refuses to accept ws://Myserver:80 and only accepts http://Myserver:80. Obviously I don't have any HTTP server running on my server, so I am guessing the connection is being refused by my server.
Also, the IoT API requires X-Auth-Token authentication. I am using the websockets Python library, but I didn't set up authentication on my server; I left the token empty on both the IoT server API and my server.
If I were to add token authentication, what parameters or arguments would be required for the WebSocket server? I tried searching the websockets docs, but with no luck.
This is not for production environment!! I am only trying to learn.
Any thoughts are welcome.
So these are the requirements:
An IoT device that counts people and saves the data to its cloud platform. Data can be accessed via an API; more specifically, it requires a webserver endpoint where it can push the data every minute or so.
A webserver on my side that receives and stores the data.
The data only needs to be refreshed every minute or so. In my humble opinion, WebSockets are necessary only for real-time use cases.
That said, my proposed solution is to use a message broker instead. I think it's easier to handle than WebSockets directly, and you do not have to worry about maintaining a live socket connection all the time (which is not energy-efficient in the IoT world).
In other words, use a pub/sub architecture. Your IoT devices publish data to the message broker (a common one is RabbitMQ), and then you build a server that subscribes to the broker, consuming its data and storing it.
Now every device connects to the cloud only when it has data available, which saves energy. The protocol may be MQTT or HTTP; MQTT is often used in the IoT world.
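For illustration, here is a minimal subscriber sketch with paho-mqtt (1.x callback signatures); the broker host and topic name are placeholders:

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the broker is established.
    client.subscribe("sensors/people-counter")

def on_message(client, userdata, msg):
    # Store or process the pushed JSON payload here.
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.loop_forever()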
Related: Pub-sub messaging benefits

Communicating over a local server in python

I am creating a collaborative note-making app in Python.
Here, one user running the app on a computer can create the server; subsequently, the changes on the screen ([color, pixel], where pixel = [x, y]) will be transmitted to the others connected to the server.
I am using Kivy to create the app. My question is about transmitting the data through the server.
I can create the server using this:
import os
import socket

ip_address = socket.gethostbyname(socket.gethostname())
# Launch the Django development server on the machine's local IP
os.system("python manage.py runserver " + ip_address + ":8000")
Now, how do others connect to the server and request the data (assuming the above code is correct)? Also, how do I send the data in Django?
Well, Django is a framework for creating a site or API that is reachable over the HTTP protocol. This has several consequences for you:
Server cannot send a message to client unless the client asks. HTTP is a "request-response" protocol. Client sends a request (for example, http://server.com/getUpdates?id=100500) and gets a response from server.
Creating clients that ask the server to give them updates all the time is a bad practice, probably leading to server DoS.
Although you can use WebSockets, using Django for such a task is really overkill.
Summarizing, you need a reliable duplex channel for sending data in both directions. I'd start with a TCP server rather than HTTP. Fortunately, the Python stdlib has a module you can start with: socketserver.
Additional reading
TCP
UDP (you will probably want this for broadcasting)
Berkeley sockets (a socket standard underlying socketserver module)
TCP vs. UDP
When deciding which protocol to use, the following aspects should be considered:
TCP is reliable. Messages never disappear silently. If there is a network error, the message will be resent; if there is no connection, an explicit error is raised. TCP uses several algorithms to fit into the network channel. It is an intelligent protocol.
UDP is unreliable. It has none of the features TCP has. Packets can disappear or arrive reordered. But UDP messages are lightweight, and in experienced hands they power systems such as networked action games and streaming video (lost and reordered messages aren't crucial there, and TCP becomes too slow).
So I'd recommend starting with TCP. It's much easier to get something working quickly and correctly than with UDP. Switch to UDP once you have some experience with TCP, a lot of people are using your app, and you want the lowest latency possible.
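To make the socketserver suggestion concrete, here is a minimal sketch of a threaded TCP server that just prints newline-terminated messages from each client; the address and port are arbitrary:

import socketserver

class DrawHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read newline-terminated messages from this client until it disconnects.
        for line in self.rfile:
            print("received:", line.decode().strip())

if __name__ == "__main__":
    # 0.0.0.0 listens on all interfaces; each client gets its own thread.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), DrawHandler) as server:
        server.serve_forever()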

Clustering TCP servers, so can send data to all clients

Important note:
I've asked this question already on ServerFault: https://serverfault.com/questions/349065/clustering-tcp-servers-so-can-send-data-to-all-clients, but I'd also like a programmer's perspective on the problem.
I'm developing a real-time mobile app by setting up a TCP connection between the app and server backend. Each user can send messages to all other users.
(I'm building the TCP server in Python with Twisted, creating my own 'protocol' for communication between the app and the backend, and hosting it on Amazon Web Services.)
Currently I'm trying to make the backend scalable (and reliable). As far as I can tell, the system could cope with more users by upgrading to a bigger server (which could become rather limiting), or by adding new servers in a cluster configuration - i.e. having several servers sitting behind a load balancer, probably with 1 database they all access.
I have sketched out the rough architecture of this:
However, what if the Red user sends a message to all other connected users? Red's server has a TCP connection with Red, but not with Green.
I can think of one way to deal with this problem:
Each server could keep an open TCP (or SSL) connection to every other server. When one server wants to send a message to all users, it simply passes it along its connections to the other servers. A record could be kept in the database of which servers are online (and their IP addresses), and one of the servers could act as a boss, i.e. decide whether the others are up and running and, if not, remove them from the database. (If a server was up but lost its connection to the boss, it could check the database to see whether it had been removed, and restart if it had; otherwise it could assume the boss was down.)
Clearly this needs refinement but shows the general principle.
Alternatively (and I'm not sure if this is possible; it definitely seems like wishful thinking on my part):
Perhaps users could just connect to a box or router, and all servers could message all users through it?
If you know how to cluster TCP servers effectively, or a design pattern that provides a solution, or have any comments at all, then I would be very grateful. Thank you :-)
You need to decide (or, if you already did this, to share these decisions with us) the reliability requirements for your system: should all messages be delivered to all users in every case (e.g. when one or more servers crash)? Can you tolerate sending the same message twice to the same user after a server crash? Your system's complexity depends directly on these decisions.
The simplest version is one where a message is not delivered to all users if a server crashes. All your servers keep TCP connections to each other. One of them receives a message from a user and sends it to all other users connected to that server and to all other connected servers; the other servers then send this message to all of their users. To scale the system, you just run an additional server that connects to all existing servers.
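Since you mention Twisted, here is a minimal sketch of the fan-out step on a single server: every received message is relayed to all other connections that server holds. In a cluster, each server would additionally open client connections to its peers and relay over those in the same way. The port and the absence of message framing are simplifications:

from twisted.internet import reactor, protocol

class RelayProtocol(protocol.Protocol):
    def connectionMade(self):
        self.factory.connections.add(self)

    def connectionLost(self, reason):
        self.factory.connections.discard(self)

    def dataReceived(self, data):
        # Fan out to every other connection on this server; peer servers
        # receiving this data would relay it to their own local clients.
        for conn in self.factory.connections:
            if conn is not self:
                conn.transport.write(data)

class RelayFactory(protocol.Factory):
    protocol = RelayProtocol

    def __init__(self):
        self.connections = set()

reactor.listenTCP(8001, RelayFactory())  # arbitrary port
reactor.run()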
Have a look at how this is handled by IRC servers. They essentially do this already: everybody can send to everybody else, on all servers, or just to single users, even ones on another server, and to groups, called "channels". It works by routing messages amongst the servers.
It's not that hard, if you can make sure the servers know each other and can talk to each other.
On a side note: on 9/11, the most reliable internet news source was the IRC network. All the web sites were down because of bandwidth; it took them ages even to get a plain-text page back up. During this time, IRC networks were able to provide near real-time, moderated news channels across the Atlantic. You might not have been able to log into a server on the other side, but at least the servers were able to keep their server-to-server connections up.
An obvious choice is to use the DB as a clearinghouse for messages. You have to store incoming messages somewhere anyway, lest they be lost if a server suddenly crashes. Put incoming messages into the central database and have notification processes on the TCP servers grab the messages and send them to the correct users.
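A rough sketch of that clearinghouse loop, assuming a hypothetical messages table and a connections dict mapping user ids to their open TCP transports:

import sqlite3
import time

def deliver_pending(db_path, connections):
    db = sqlite3.connect(db_path)
    while True:
        rows = db.execute(
            "SELECT id, recipient, body FROM messages WHERE delivered = 0").fetchall()
        for msg_id, recipient, body in rows:
            conn = connections.get(recipient)
            if conn is not None:
                # Push over the user's existing TCP connection, then mark as sent.
                conn.write(body.encode())
                db.execute("UPDATE messages SET delivered = 1 WHERE id = ?", (msg_id,))
        db.commit()
        time.sleep(1)  # or replace polling with LISTEN/NOTIFY-style signaling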
A TCP server cannot be clustered; the diagram you posted here is a classic HTTP server example. Since the device opens a TCP connection to the server directly (a pure socket), there is no way to put a load-balancing server in front of it.
