Websockets behind nginx triggered by zeromq? - python

I'm trying to design a system that will process large amounts of data and send updates to the client about its progress. I'd like to use nginx (which, thankfully, just started supporting websockets) and uwsgi for the web server, and I'm passing messages through the system with zeromq. Ideally the solution could be written in Python, but I'm also open to a Node.js or even a Go solution.
Here is the flow that I'd like to achieve:
Client visits a website and requests that a large amount of data be processed.
The server farms out the processing to another process/server [the worker] via zeromq, and replies to the client request explaining that processing has begun, including information about how to set up a websocket with the server.
The client sets up the websocket connection and waits for updates.
When the processing is done, the worker sends a "processing done!" message to the websocket process via zeromq, and the websocket process pushes the message down to the client.
Is what I describe possible? I guess I was thinking that I could run uwsgi in emperor mode so that it can handle one process (port) for the webserver and another for the websocket process. I'm just not sure if I can find a way to both receive zeromq messages and manage websocket connections all from the same process. Maybe I have to initiate the final websocket push from the worker?
Any help/correct-direction-pointing/potential-solutions would be much appreciated. Any sample or snippet of an nginx config file with websockets properly routed would be appreciated as well.
Thanks!

Sure, that should be possible. You might want to look at zerogw.
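In case it helps, here is a rough sketch of how a single Python process can do both jobs: Tornado manages the websocket connections while pyzmq's ZMQStream delivers worker messages on the same event loop. The ports, URL pattern, and message format below are assumptions, and nginx would proxy /ws/ to this process while the rest of the site goes to uwsgi.

    # Hypothetical sketch: one Tornado process owns the websockets and
    # subscribes to worker progress messages over zeromq.
    import json

    import tornado.ioloop
    import tornado.web
    import tornado.websocket
    import zmq
    from zmq.eventloop.zmqstream import ZMQStream

    clients = {}  # job_id -> set of websocket connections waiting on that job


    class ProgressSocket(tornado.websocket.WebSocketHandler):
        def open(self, job_id):
            self.job_id = job_id
            clients.setdefault(job_id, set()).add(self)

        def on_close(self):
            clients.get(self.job_id, set()).discard(self)


    def on_worker_message(frames):
        # Workers are assumed to publish JSON like {"job_id": "...", "status": "done!"}.
        msg = json.loads(frames[-1].decode())
        for ws in clients.get(msg["job_id"], set()):
            ws.write_message(msg)


    if __name__ == "__main__":
        ctx = zmq.Context.instance()
        sub = ctx.socket(zmq.SUB)
        sub.bind("tcp://*:5556")            # workers connect their PUB sockets here
        sub.setsockopt(zmq.SUBSCRIBE, b"")
        ZMQStream(sub).on_recv(on_worker_message)

        tornado.web.Application([(r"/ws/([^/]+)", ProgressSocket)]).listen(8888)
        tornado.ioloop.IOLoop.current().start()

With a layout like this the worker never needs to know about websockets at all; it just publishes progress on its zeromq socket.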

Related

Long polling scalable architecture in tornado/cyclone

I want to implement long polling in Python using cyclone or tornado, with the scalability of the service in mind from the beginning. Clients might stay connected to this service for hours. My concept:
Client HTTP requests will be processed by multiple tornado/cyclone handler threads behind an NGINX proxy (serving as a load balancer). There will be multiple data queues: one for all unprocessed requests from all clients, and the rest containing responses specific to each connected client, previously generated by worker processes. When a request reaches a tornado/cyclone handler thread, its data will be sent to the worker queue and then processed by workers (which connect to the database etc.). Meanwhile, the tornado/cyclone handler thread will look into the client-specific queue and send a response with data back to the client (if there is one waiting in the queue). Please see the diagram.
Simple diagram: https://i.stack.imgur.com/9ZxcA.png
I am considering a queue system because some requests might be pretty heavy on the database, and some requests might create notifications and messages for other clients. Is this the way to go towards a scalable server, or is it just overkill?
After doing some research I have decided to go with tornado websockets connected to zeroMQ. Inspired by this answer: Scaling WebSockets with a Message Queue.
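For what it's worth, the worker side of such a setup can stay very small. The sketch below is only illustrative (the socket addresses and the JSON message shape are made up): workers pull jobs from the web handlers and publish results keyed by client id, so the websocket/long-poll process can route each response to the right connection.

    # Illustrative worker: PULL jobs from the web handlers, PUB results
    # tagged with the client id (addresses and fields are assumptions).
    import json

    import zmq


    def run_worker():
        ctx = zmq.Context.instance()
        jobs = ctx.socket(zmq.PULL)
        jobs.connect("tcp://127.0.0.1:5557")      # handlers PUSH work here
        results = ctx.socket(zmq.PUB)
        results.connect("tcp://127.0.0.1:5556")   # the frontend binds a SUB here

        while True:
            job = json.loads(jobs.recv().decode())
            # ... heavy processing, database work, etc. ...
            results.send_json({"client_id": job["client_id"], "status": "done"})


    if __name__ == "__main__":
        run_worker()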

Server Push with SocketIO from Celery Task

I have a flask application within which I have many long running asynchronous tasks (~hours). It's important that the state of these tasks is communicated to the client.
I use celery to manage the background task queue, and I'm currently trying to broadcast updates to the client from each background thread via socketIO. Is this possible? Is there a better suited strategy to achieving what I would like?
You did not say, but I assume you plan on using Flask-SocketIO to handle the server-side SocketIO and not the official Node.js server, correct?
What you want to do can be done, but with the current version of Flask-SocketIO, the problem is that the process that hosts the Flask and Flask-SocketIO server owns the socket connections with the clients, so it is the only process that can communicate with them. At this time, Flask-SocketIO does not offer any help in sending data to clients from other processes such as Celery workers; this part you have to implement yourself. Specifically for Celery, you can have your long running tasks expose progress information that the server process can pick up and send to the clients.
I am currently working on improvements to Flask-SocketIO that will enable any process to send messages to connected clients using a Redis pub/sub backend for communication to the Flask-SocketIO server. Once this work is completed you will be able to write data to any client transparently from your Celery process.
You also ask if there is another alternative. You should also consider that the client can poll the server for status. If the updates do not need to be very frequent, then this is an option that is going to be much easier to implement. The client asks the server for status for a given task, and the server in turn asks the Celery task. I showed this approach in my Flask+Celery blog article.
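For reference, the polling alternative can be as small as the sketch below (the route, module name and meta fields are made up): the client periodically GETs /status/<task_id> and the Flask view asks Celery for that task's current state.

    # Hypothetical polling endpoint; assumes a Celery app defined in tasks.py
    # and tasks that report progress via self.update_state(meta={...}).
    from celery.result import AsyncResult
    from flask import Flask, jsonify

    from tasks import celery_app   # your Celery instance (assumed module name)

    app = Flask(__name__)


    @app.route("/status/<task_id>")
    def task_status(task_id):
        result = AsyncResult(task_id, app=celery_app)
        info = result.info if isinstance(result.info, dict) else str(result.info)
        return jsonify(state=result.state, info=info)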
I was able to solve this by creating an endpoint on the Flask server. See my answer here for details.

Scaling a decoupled realtime server alongside a standard webserver

Say I have a typical web server that serves standard HTML pages to clients, and a websocket server running alongside it used for realtime updates (chat, notifications, etc.).
My general workflow is when something occurs on the main server that triggers the need for a realtime message, the main server sends that message to the realtime server (via a message queue) and the realtime server distributes it to any related connection.
My concern is, if I want to scale things up a bit, and add another realtime server, it seems my only options are:
1) Have the main server keep track of which realtime server the client is connected to. When that client receives a notification/chat message, the main server forwards that message along to only the realtime server the client is connected to. The downside here is code complexity, as the main server has to do some extra bookkeeping.
2) Or instead have the main server simply pass that message along to every realtime server; only the server the client is connected to would actually do anything with it. This would result in a number of wasted messages being passed around.
Am I missing another option here? I'm just trying to make sure I don't go too far down one of these paths and realize I'm doing things totally wrong.
If the scenario is
a) The main web server raises a message upon an action (let's say a record is inserted)
b) it notifies the appropriate real-time server
you could decouple these two steps by using an intermediate pub/sub architecture that forwards the messages to the intended recipient.
An implementation would be
1) You have a redis pub/sub channel; when a client connects to a real-time socket, you start listening on that channel
2) When the main app wants to notify a user via the real-time server, it pushes a message to the channel; the real-time server gets it and forwards it to the intended user.
This way, you decouple the realtime notification from the main app and you don't have to keep track of where the user is.
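A bare-bones version of that idea with redis-py might look like the sketch below (the channel name, message shape and the forward_to_user callback are invented for illustration): the main app publishes once, and every realtime server runs a listener that only forwards messages for users it actually holds connections for.

    # Illustrative redis pub/sub listener running inside each realtime server.
    import json

    import redis

    r = redis.StrictRedis(host="localhost", port=6379)


    def listen_for_notifications(forward_to_user):
        pubsub = r.pubsub()
        pubsub.subscribe("notifications")        # assumed channel name
        for item in pubsub.listen():
            if item["type"] != "message":
                continue
            msg = json.loads(item["data"])
            # Only act if this server holds the user's websocket connection.
            forward_to_user(msg["user_id"], msg["payload"])

    # The main app side is just:
    #   r.publish("notifications", json.dumps({"user_id": 42, "payload": {...}}))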
The problem you are describing is the common "message backplane" used, for example, in SignalR; it is also related to the "fanout message exchange" in messaging architectures. When you have a backplane or do fanout, every message is forwarded to every message server, so clients can connect to any server and still get the message. This approach is reasonable when you have to support both long polling and websockets. However, as you noticed, it is a waste of traffic and resources.
You need to use a message infrastructure with intelligent routing, like RabbitMQ. Take a look at topic and headers exchanges: https://www.rabbitmq.com/tutorials/amqp-concepts.html
How Topic Exchanges Route Messages
RabbitMQ for Windows: Exchange Types
There are tons of different queuing frameworks. Pick the one you like, but ensure you can have more exchange modes than just direct or fanout ;) In the end, a WebSocket is just an endpoint to connect to a message infrastructure. So if you want to scale out, it boils down to the backend you have :)
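As a concrete illustration of topic routing with pika (the exchange name, routing keys and message body are made up), the broker can do the per-user routing so the main server never has to know which realtime server holds a given connection:

    # Illustrative pika sketch: the main server publishes with a per-user
    # routing key; each realtime server binds a queue only for its own users.
    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.exchange_declare(exchange="realtime", exchange_type="topic")

    # Main server side: publish an event for user 42.
    ch.basic_publish(exchange="realtime", routing_key="user.42",
                     body='{"event": "chat", "text": "hi"}')

    # Realtime server side: bind a private queue for each connected user.
    q = ch.queue_declare(queue="", exclusive=True).method.queue
    ch.queue_bind(exchange="realtime", queue=q, routing_key="user.42")
    ch.basic_consume(queue=q,
                     on_message_callback=lambda c, m, p, body: print(body),
                     auto_ack=True)
    ch.start_consuming()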
For just a few realtime servers, you could conceivably just keep a list of them in the main server and just go through them round-robin.
Another approach is to use a load balancer.
Basically, you'll have one dedicated node to receive the requests from the main server, and then have that load-balancer node take care of choosing which websocket/realtime server to forward the request to.
Of course, this just shifts the code complexity from the main server to a new component, but conceptually I think it's better and more decoupled.
Changed the answer because a reply indicated that the "main" and "realtime" servers are already load-balanced clusters and not individual hosts.
The central scalability question seems to be:
My general workflow is when something occurs on the main server that triggers the need for a realtime message, the main server sends that message to the realtime server (via a message queue) and the realtime server distributes it to any related connection.
Emphasis on the word "related". Assume you have 10 "main" servers and 50 "realtime" servers, and an event occurs on main server #5: which of the websockets would be considered related to this event?
Worst case is that any event on any "main" server would need to propagate to all websockets. That's O(N^2) complexity, which counts as a severe scalability impairment.
This O(N^2) complexity can only be prevented if you can group the related connections into groups that don't grow with the cluster size or the total number of connections. Grouping requires state memory to store which group(s) a connection belongs to.
Remember that there are three ways to store state:
global memory (memcached / redis / DB, ...)
sticky routing (load balancer configuration)
client memory (cookies, browser local storage, link/redirect URLs)
Option 3 counts as the most scalable one because it avoids a central state store.
For passing the messages from the "main" to the "realtime" servers, that traffic should by definition be much smaller than the traffic towards the clients. There are also efficient frameworks to push pub/sub traffic.

Which web servers are compatible with gevent and how do the two relate?

I'm looking to start a web project using Flask and its SocketIO plugin, which depends on gevent (something something greenlets), but I don't understand how gevent relates to the webserver. Does using gevent restrict my server choice at all? How does it relate to the different levels of web servers that we have in python (e.g. Nginx/Apache, Gunicorn)?
Thanks for the insight.
First, let's clarify what we are talking about:
gevent is a coroutine-based networking library that makes event-driven programming easy. It lets a server keep accepting work and respond immediately instead of "blocking" on any single request.
socket.io is a JavaScript library for creating clients that maintain permanent connections to servers, which push events to them. The library can then react to these events.
greenlet: think of this as a lightweight thread, a way to launch multiple workers that each do some task.
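To make the gevent/greenlet part less abstract, here is a tiny, self-contained example (the URLs are placeholders): after monkey-patching, blocking socket calls yield to other greenlets, so one process can wait on many things at once.

    # Minimal gevent illustration: spawn two greenlets that each block on
    # network I/O; gevent switches between them instead of blocking the process.
    from gevent import monkey
    monkey.patch_all()   # must run before other modules touch the socket library

    import gevent
    import urllib.request


    def fetch(url):
        return urllib.request.urlopen(url).read()[:60]


    jobs = [gevent.spawn(fetch, u)
            for u in ("http://example.com", "http://example.org")]
    gevent.joinall(jobs, timeout=10)
    print([job.value for job in jobs])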
A highly simplified overview of the entire process follows:
Imagine you are creating a chat client.
You need a way to notify the users' screens when anyone types a message. For this to happen, you need some way to tell all the users when a new message is there to be displayed. That's what socket.io does. You can think of it like a radio that is tuned to a particular frequency. Whenever someone transmits on this frequency, the code does something. In the case of the chat program, it adds the message to the chat box window.
Of course, if you have a radio tuned to a frequency (your client), then you need a radio station/dj to transmit on this frequency. Here is where your flask code comes in. It will create "rooms" and then transmit messages. The clients listen for these messages.
You can also write the server-side ("radio station") code in socket.io using node, but that is out of scope here.
The problem here is that traditionally - a web server works like this:
1) A user types an address into a browser, and hits enter (or go).
2) The browser reads the web address, and then using the DNS system, finds the IP address of the server.
3) It creates a connection to the server, and then sends a request.
4) The webserver accepts the request.
5) It does some work, or launches some process (depending on the type of request).
6) It prepares (or receives) a response from the process.
7) It sends the response to the client.
8) It closes the connection.
Between 3 and 8, the client (the browser) is waiting for a response - it is blocked from doing anything else. So if there is a problem somewhere, like say, some server side script is taking too long to process the request, the browser stays stuck on the white page with the loading icon spinning. It can't do anything until the entire process completes. This is just how the web was designed to work.
This kind of 'blocking' architecture works well for 1-to-1 communication. However, for multiple people to keep updated, this blocking doesn't work.
The event libraries (gevent) help with this because they accept the request without blocking the client; they can send a response immediately and finish the actual work once the process is complete.
Your application, however, still needs to notify the client of the result, and since the connection has been closed, you don't have a way to contact the client back.
In order to notify the client and to make sure the client doesn't need to "refresh", a permanent connection should be open - that's what socket.io does. It opens a permanent connection, and is always listening for messages.
1) So a work request comes in from one end and is accepted.
2) The work is executed and a response is generated by something else (it could be the same program or another program).
3) Then a notification is sent: "hey, I'm done with your request - here is the response".
4) The listener from step 1 hears this message and then does something with it (a condensed sketch follows below).
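A condensed Flask-SocketIO version of those four steps might look like this (the event names and fake workload are assumptions; socketio.emit without a room broadcasts to everyone, so a real app would target the requesting client):

    # Illustrative only: the client emits "start_work", the server runs the job
    # in a background greenlet and pushes "work_done" down the still-open socket.
    from flask import Flask
    from flask_socketio import SocketIO

    app = Flask(__name__)
    socketio = SocketIO(app)   # uses gevent or eventlet if installed


    @socketio.on("start_work")
    def handle_start_work(data):
        def work():
            socketio.sleep(5)                      # stand-in for the real job
            socketio.emit("work_done", {"result": 42})
        socketio.start_background_task(work)       # returns immediately


    if __name__ == "__main__":
        socketio.run(app)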
Underneath it all is WebSocket, a new full-duplex protocol that enables all this radio/DJ functionality.
Things common between WebSockets and HTTP:
Work on the same port (80)
WebSocket requests start off as HTTP requests for the handshake (an upgrade header), but then shift over to the WebSocket protocol - at which point the connection is handed off to a websocket-compatible server.
All your traditional web server has to do is listen for this handshake request, acknowledge it, and then pass the request on to a websocket-compatible server - just like any other normal proxy request.
For Apache, you can use mod_proxy_wstunnel
nginx versions 1.3+ have websocket support built in

Sync data with Local Computer Architecture

The scenario is
I have multiple local computers running a python application. These are on separate networks, waiting for data to be sent to them from a web server. These computers are on networks without a static IP and are generally behind a firewall and proxy.
On the other hand, I have a web server which gets updates from the user through a form and sends the update to the correct local computer.
Question
What options do I have to enable this? Currently I am sending CSV files over FTP to achieve this, but it is not real time.
The application is built on python and using django for the web part.
Appreciate your help
Use a REST API. Then you can post information to your Django app over HTTP, using an authentication key if necessary.
http://www.django-rest-framework.org/ should help you get started quickly
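One way to read this advice, since the local machines are behind NAT and firewalls and therefore have to open the connection themselves, is to have each local computer poll the Django REST endpoint. The sketch below is only illustrative (the URL, token and payload shape are invented).

    # Hypothetical poller running on each local computer; all traffic is
    # outbound HTTPS, so no holes need to be punched in the firewall.
    import time

    import requests

    API = "https://example.com/api/updates/"             # assumed DRF endpoint
    HEADERS = {"Authorization": "Token <your-api-key>"}   # DRF TokenAuthentication

    while True:
        for update in requests.get(API, headers=HEADERS, timeout=30).json():
            print("apply update:", update)                # hand off to the local app
        time.sleep(10)                                     # poll interval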
Sounds like you need a message queue.
You would run a separate broker server which is sent tasks by your web app. This could be on the same machine. On your two local machines you would run queue workers which connect to the broker to receive tasks (so no inbound connection required), then notify the broker in real time when they are complete.
Examples are RabbitMQ and Oracle Tuxedo. What you choose will depend on your platform & software.
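As a hedged illustration of the worker/broker idea with Celery over a RabbitMQ broker (the broker URL, queue and task names are assumptions): the Django app sends a task to a queue named after the target computer, and that computer runs a worker that consumes only its own queue. The worker dials out to the broker, so nothing has to connect inbound through the firewall.

    # sync.py - shared between the web server and the local computers.
    from celery import Celery

    app = Celery("sync", broker="amqp://user:pass@broker.example.com//")


    @app.task
    def apply_update(data):
        # Runs on the local computer; hand the update to the local application.
        print("applying update:", data)

    # On the web server, after the form is saved:
    #   apply_update.apply_async(args=[form.cleaned_data], queue="computer-17")
    #
    # On local computer 17, start a worker that listens only on its queue:
    #   celery -A sync worker -Q computer-17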
