I am creating a collaborative note-taking app in Python.
Here, one person running the app on their computer can create the server; the changes on the screen ([color, pixel], where pixel = [x, y]) will subsequently be transmitted to everyone connected to the server.
I am using Kivy to create the app. My question concerns transmitting the data over the network.
I can create the server using this:
    import socket
    import subprocess

    ip_address = socket.gethostbyname(socket.gethostname())
    # Launch the Django development server on this machine's IP.
    subprocess.run(["python", "manage.py", "runserver", ip_address + ":8000"])
Now, how do others connect to the server and request the data (assuming the above code is correct)? Also, how do I send the data in Django?
Well, Django is a framework for creating a site or API that is reachable through the HTTP protocol. This has several consequences for you:
The server cannot send a message to a client unless the client asks. HTTP is a "request-response" protocol: the client sends a request (for example, http://server.com/getUpdates?id=100500) and gets a response from the server.
Having clients poll the server for updates all the time is bad practice and can effectively DoS your own server.
Although you could use WebSockets, Django is really overkill for such a task.
Summarizing: you need a reliable duplex channel for sending data in both directions. I'd start with a TCP server rather than HTTP. Fortunately, the Python stdlib has a module you can start with: socketserver.
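For instance, here is a minimal sketch of a broadcasting server built on socketserver, under these assumptions: port 8000 is free, and each update arrives as one newline-terminated JSON line of the form [color, [x, y]]:

    import json
    import socketserver

    clients = []  # sockets of everyone currently connected

    class NoteHandler(socketserver.BaseRequestHandler):
        def handle(self):
            clients.append(self.request)
            try:
                # Each line is one update, e.g. '["#ff0000", [12, 34]]\n'
                for line in self.request.makefile():
                    json.loads(line)  # validate the [color, [x, y]] payload
                    for sock in clients:
                        if sock is not self.request:
                            sock.sendall(line.encode())
            finally:
                clients.remove(self.request)

    # One thread per client; fine for a handful of note-takers.
    server = socketserver.ThreadingTCPServer(("0.0.0.0", 8000), NoteHandler)
    server.serve_forever()

Each client then just opens a TCP connection to the server's IP on port 8000 and reads and writes those JSON lines.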
Additional reading
TCP
UDP (you will probably want this for broadcasting)
Berkeley sockets (the socket standard underlying the socketserver module)
TCP vs. UDP
When deciding which protocol to use, consider the following aspects:
TCP is reliable. Messages never disappear silently: if there was a network error, the message will be resent, and if there's no connection, an explicit error will be raised. TCP uses several algorithms to fit into the network channel. It is an intelligent protocol.
UDP is unreliable. It has none of TCP's guarantees: packets can disappear or arrive reordered. But UDP messages are lightweight, and in experienced hands they power systems such as networked action games and streaming video (lost and reordered messages aren't crucial there, and TCP becomes too slow).
So I'd recommend starting with TCP. It's far easier to get working fast and correctly than UDP. Switch to UDP once you have some experience with TCP, and only if a lot of people are using your app and want the lowest latency possible.
Related
I have an application, foo, which takes in data, does stuff to it, and then publishes the processed data over AMQ for another downstream application to grab. Until now, foo has always gotten its data by connecting to another AMQ server to which another script publishes packetized data (a lot of handwaving here, but the specifics don't really matter).
Recently a change has been made, and foo needs to be able to grab its data from a UDP socket. Is AMQ able to connect to this socket and receive/listen to the data being transmitted over it? From my understanding, AMQ uses TCP to establish a connection to the client, and some initial research points me to the UDP Transport documentation from Apache, but not much else.
Alternatively, I could develop a rough UDP socket listener in Python, and then publish those messages to AMQ for foo to grab, but it would be optimal to have it all included in foo itself.
Not necessarily looking for an exhaustive solution here; quick and dirty would be enough to get me started.
Thanks!
ActiveMQ itself is a broker and therefore doesn't connect to sockets and listen for messages. It is the job of a client to connect to the broker and send and/or receive messages.
The UDP transport documentation is just theoretical as far as I know. It is technically possible to use UDP as the base of a traditional messaging protocol, but I've never actually seen it done, since UDP is unreliable. The documentation even says, "Note that by default UDP is not reliable; datagrams can be lost so you should add a reliability layer to ensure the JMS contract can be implemented on a non-reliable transport." Adding a "reliability layer" is impractical when TCP can simply be used instead. All of the protocols which ActiveMQ supports (i.e. AMQP, STOMP, MQTT, OpenWire) fundamentally require a reliable network transport.
I definitely think you'll need some kind of intermediary process to read the data from the UDP socket and push it to the broker.
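As a rough sketch of that intermediary (quick and dirty, as requested): read datagrams from the UDP socket and republish them to the broker over STOMP. The stomp.py package, the ports, and the queue name below are assumptions for illustration.

    import socket

    import stomp  # pip install stomp.py

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(("0.0.0.0", 5005))  # wherever the data is being transmitted

    broker = stomp.Connection([("localhost", 61613)])  # ActiveMQ STOMP port
    broker.connect("admin", "admin", wait=True)

    while True:
        data, addr = udp.recvfrom(65535)
        # Republish each datagram as a broker message for foo to consume.
        broker.send(destination="/queue/foo.input", body=data.decode())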
Key points:
I need to send roughly 100 floats every 1-30 seconds from one machine to another.
The first machine is collecting those values from sensors connected to it.
The second machine is listening for them and passing them to an HTTP server (nginx), a Telegram bot, and another program that sends email alerts.
How would you do this and why?
Please be precise. It's the first time I've worked with sockets and with Python, but I'm confident I can do this. Just give me the crucial details, enlighten me!
A small portion (a few lines) of the core code would be appreciated if you think it's a delicate part, but the main goal of my question is to see the big picture.
The main thing here is to decide on a connection design and to choose a protocol, i.e. will you keep a persistent connection to your server, or connect each time new data is ready?
Then: will you use HTTP POST, WebSockets, or plain sockets? Will you rely exclusively on nginx, or will your data catcher be a separate service?
The most secure way, if other people will also be connecting to nginx to view sites etc., would be this:
Write or use another server running on another port, for example another nginx process just for that. Then use SSL (i.e. HTTPS) with basic authentication to prevent anyone else from abusing the connection.
Then, on the client side, pack all the data into a packet every x seconds (pickle.dumps() or JSON or something), connect to your port with your credentials, and pass the packet along.
A Python script can be waiting for it there.
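A minimal sketch of that client side, assuming the requests package and a placeholder HTTPS endpoint behind basic authentication (read_sensors() is a hypothetical stand-in for your data source):

    import json
    import time

    import requests

    while True:
        readings = read_sensors()  # hypothetical: returns ~100 floats
        requests.post(
            "https://example.com:8443/ingest",  # placeholder endpoint
            data=json.dumps(readings),
            auth=("sensor-box", "secret"),      # HTTP basic auth
            timeout=10,
        )
        time.sleep(30)  # or whatever cadence your sensors dictate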
Alternatively, you can write a socket server from scratch in Python (not especially hard) to wait for your packets.
The caveat here is that you have to implement your own protocol and security, but you gain some other benefits: for example, it's much easier to maintain a persistent connection if you want or need one. I don't think that's necessary here, though, and coding recovery from broken connections can become bulky.
Just wait on some port for a connection. The client must clearly identify itself (else you instantly drop the connection), prove that it speaks your protocol, and then send the data.
Use SSL sockets so that you don't have to implement encryption yourself to protect the authentication data. You may even rely solely on keys built in advance for security, and then pass only the data.
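A hedged sketch of such a server: a socketserver-based TCP listener wrapped in TLS with pre-built keys. The certificate paths and port are placeholders; a real setup would also verify the client's identity (e.g. via client certificates).

    import socketserver
    import ssl

    # Context built once from keys generated in advance (placeholder paths).
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")

    class PacketHandler(socketserver.StreamRequestHandler):
        def handle(self):
            data = self.rfile.read()  # one packet per connection
            print("got %d bytes from %s" % (len(data), self.client_address))

    class TLSServer(socketserver.TCPServer):
        def get_request(self):
            sock, addr = super().get_request()
            # Wrap every accepted connection in TLS before handing it over.
            return ctx.wrap_socket(sock, server_side=True), addr

    TLSServer(("0.0.0.0", 9443), PacketHandler).serve_forever()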
Do not worry about speed. Sockets are handled by the OS, and if you are on a Unix-like system you can connect as many times as you want, at as short an interval as you need. Nothing short of a DoS attack will impact it much.
If you're on Windows, it's better to use some ready-made server, because Windows sometimes does not release a socket on time, so you would be forced to wait or do some hackery to avoid this unfortunate behaviour (non-blocking sockets, SO_REUSEADDR, and then some flow control will be needed).
Since your data is small, you don't have to worry much about the server protocol. I would use HTTPS myself, but I would write my own lightweight server in Python, or modify and run one of the examples from the internet. That's just me, though.
The simplest thing that could possibly work would be to take your N floats, convert them to a binary message using struct.pack(), and then send them via a UDP socket to the target machine (if it's on a single LAN you could even use UDP multicast, so that multiple receivers could get the data if needed). You can safely send a maximum of 60 to 170 double-precision floats in a single UDP datagram (depending on your network).
This requires no application protocol, is easily debugged at the network level using Wireshark, is efficient, and makes it trivial to implement other publishers or subscribers in any language.
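A minimal sketch of both ends, with the port and addresses as placeholders ("!%dd" packs the floats as network-byte-order doubles):

    import socket
    import struct

    # Sender: pack the floats into one datagram and send it.
    values = [1.5, 2.5, 3.5]  # your ~100 sensor readings would go here
    payload = struct.pack("!%dd" % len(values), *values)
    out = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    out.sendto(payload, ("192.0.2.10", 5005))  # receiver address (placeholder)

    # Receiver: unpack however many doubles arrived.
    inp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    inp.bind(("", 5005))
    data, addr = inp.recvfrom(4096)
    values = struct.unpack("!%dd" % (len(data) // 8), data)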
I would like to make a multi-threaded UDP server in Python.
The purpose is to be able to connect several clients to the server (not socket connections, but username-and-password logins), interact with each of them, and perform some actions on the server, all at the same time.
I am a little confused by all the different types of threading and I don't know what to use.
To be clearer, this is exactly what I want to do at the same time:
Wait for clients to send data for the first time and register their IP in a database
Interact with "connected" clients by waiting for them to send datagrams and responding to them
Be able to act on the server itself, for example to change a client's password in my database
I would have a look at a framework that is good at handling asynchronous I/O. The idea is not to have one thread per socket that blocks until it receives data, but instead to let a single thread handle many sockets at once. This scales well if you want your server to handle many clients.
For example (see also the stdlib sketch after this list):
Gevent - "a coroutine-based Python networking library", example
Twisted - "an event-driven networking engine", example
Eventlet - "a concurrent networking library", example (TCP, but it uses a patched socket so you can also refer to the Python wiki page about UDP Communication)
I have a big problem and am having a hard time solving it. I have a custom-made game controller which outputs some data from its sensors via serial communication and is connected to the PC via a serial port. I do the calculation of the current controller position in a Matlab script. I am building a web application that will display the data (the position) of the device in a web browser, but I can't seem to work out how to connect my device to the browser. The Matlab script sends all the position data to a UDP port with a sampling frequency of 100 Hz (100 samples per second).
I need to make a persistent connection between the web browser and my Matlab script so I can display the data. I am thinking about using the WebSockets API, but it does not "speak" UDP. So my idea was to read the data from UDP with a custom Python server and then create a WebSocket on that Python server, sending the data received via the UDP port on to the web browser. Oh, and it would be nice if I could communicate in both directions. Will this work? Any ideas on how to do it? How is this usually done? I mean, how can one connect, say, a temperature sensor to a web browser to display its data in real time?
Any answer will be gladly appreciated.
Thanks,
Leon
Note that although the WebSocket protocol is built on top of TCP sockets, it is not raw TCP. A WebSocket connection starts with an HTTP-friendly handshake (with some CORS functionality built in). WebSockets are also message-based (rather than streaming like TCP), so each message carries a couple of bytes of framing headers.
You might look at websockify (disclaimer: I made websockify). Websockify is a Python server that bridges/proxies between WebSockets and plain TCP sockets. I don't think it would be particularly difficult to adapt it to handle UDP sockets on the backend.
Websockify (designed to be used together with the included include/websock.js front-end library) supports binary data even over the older Hixie versions of the protocol. This allows it to work with iOS (iPhone, iPad) devices, which still only support the older version of the protocol.
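If you'd rather roll the bridge yourself, here is a sketch of the UDP-to-WebSocket idea using asyncio plus the third-party websockets package. The ports and the text framing are assumptions, and depending on the websockets version the handler may also receive a path argument:

    import asyncio

    import websockets  # pip install websockets

    browsers = set()  # currently connected WebSocket clients

    class UdpReceiver(asyncio.DatagramProtocol):
        def datagram_received(self, data, addr):
            # Forward each position sample from the Matlab script to
            # every connected browser.
            for ws in browsers:
                asyncio.create_task(ws.send(data.decode()))

    async def handler(websocket):
        browsers.add(websocket)
        try:
            await websocket.wait_closed()
        finally:
            browsers.discard(websocket)

    async def main():
        loop = asyncio.get_running_loop()
        await loop.create_datagram_endpoint(
            UdpReceiver, local_addr=("0.0.0.0", 9999))  # Matlab sends here
        async with websockets.serve(handler, "0.0.0.0", 8765):
            await asyncio.Future()  # run forever

    asyncio.run(main())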
Important note:
I've asked this question already on ServerFault: https://serverfault.com/questions/349065/clustering-tcp-servers-so-can-send-data-to-all-clients, but I'd also like a programmer's perspective on the problem.
I'm developing a real-time mobile app by setting up a TCP connection between the app and server backend. Each user can send messages to all other users.
(I'm making the TCP server in Python with Twisted, creating my own 'protocol' for communication between the app and the backend, and hosting it on Amazon Web Services.)
Currently I'm trying to make the backend scalable (and reliable). As far as I can tell, the system could cope with more users by upgrading to a bigger server (which could become rather limiting) or by adding new servers in a cluster configuration, i.e. several servers sitting behind a load balancer, probably with one database they all access.
I have sketched out the rough architecture of this:
However what if the Red user sends a message to all other connected users? Red's server has a TCP connection with Red, but not with Green.
I can think of one way to deal with this problem:
Each server could hold an open TCP (or SSL) connection to every other server. When one server wants to send a message to all users, it simply passes it along its connections to the other servers. A record could be kept in the database of which servers are online (and their IP addresses), and one of the servers could be a boss, i.e. it decides whether the others are up and running, and if not, it removes them from the database. (If a server was up but lost its connection to the boss, it could check the database to see whether it had been removed, and restart if it had; otherwise it could assume the boss was down.)
Clearly this needs refinement but shows the general principle.
Alternatively (I'm not sure if this is possible; it definitely seems like wishful thinking on my part):
Perhaps users could just connect to a box or router, and all servers could message all users through it?
If you know how to cluster TCP servers effectively, or a design pattern that provides a solution, or have any comments at all, then I would be very grateful. Thank you :-)
You need to decide on (or, if you already have, share with us) the reliability requirements for your system: should all messages be delivered to all users in every case (e.g. when one or more servers crash)? Can you tolerate sending the same message twice to the same user after a server crash? Your system's complexity depends directly on these decisions.
The simplest version is one where a message may not reach every user if a server crashes. All your servers keep TCP connections to each other. One of them receives a message from a user and sends it to all of its own connected users and to all the other servers; the other servers then send it to all of their users. To scale the system, you just run an additional server that connects to all the existing servers.
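A minimal Twisted sketch of that fan-out (the port, the peer bookkeeping, and the framing here are all simplifications for illustration; a real system would authenticate peers and frame messages properly):

    from twisted.internet import protocol, reactor

    connections = set()  # local users plus TCP links to peer servers

    class RelayProtocol(protocol.Protocol):
        def connectionMade(self):
            connections.add(self)

        def connectionLost(self, reason):
            connections.discard(self)

        def dataReceived(self, data):
            # Relay each incoming message to every other connection;
            # a peer server receiving it relays it to its own users.
            for conn in connections:
                if conn is not self:
                    conn.transport.write(data)

    factory = protocol.ServerFactory()
    factory.protocol = RelayProtocol
    reactor.listenTCP(8000, factory)
    reactor.run()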
Have a look at how IRC servers handle this; they can essentially do it already. Everybody can send to everybody else, on all servers, or just to single users (even on another server), or to groups, called "channels". It works by routing messages amongst the servers.
It's not that hard, if you can make sure the servers know each other and can talk to each other.
On a side note: on 9/11, the most reliable internet news source was the IRC network. All the websites were down because of bandwidth; it took them ages to get even a plain-text web page back up. During this time, IRC networks were able to provide near real-time, moderated news channels across the Atlantic. You perhaps could no longer log into a server on the other side, but at least the servers were able to keep up their server-to-server connections.
An obvious choice is to use the DB as a clearinghouse for messages. You have to store incoming messages somewhere anyway, lest they be lost if a server suddenly crashes. Put incoming messages into the central database and have notification processes on the TCP servers grab the messages and send them to the correct users.
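If that central database happened to be PostgreSQL, its LISTEN/NOTIFY mechanism fits this pattern well. Here is a hedged sketch of the notification side using psycopg2; the DSN, channel name, and payload format are assumptions:

    import json
    import select

    import psycopg2

    conn = psycopg2.connect("dbname=chat")  # placeholder DSN
    conn.set_isolation_level(
        psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute("LISTEN new_messages;")

    while True:
        # Block until the database signals a newly inserted message.
        if select.select([conn], [], [], 30) == ([], [], []):
            continue  # timed out; loop and wait again
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            message = json.loads(note.payload)
            # ...find which of this server's TCP connections should get
            # the message and write it to them...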
A TCP server cannot be clustered this way; the diagram you posted is a classic HTTP server example.
Since the device opens a TCP connection to the server (a pure socket, say), there is no way of establishing a load-balancing server.