Asynchronous socket connections in Python or Node

I am creating an application that has multiple connections to a third-party chat streaming API (socket based).
The way it works is: every user has an account on my app and another account on the third-party app. He gives me an access token for the third-party chat app, and I connect to the third-party API to stream his chats. This happens for hundreds of users.
I need to create a socket connection pool for every user and run parallel threads. I am using a Python library (for that API) and am able to achieve real-time feeds for single users. How do I implement an asynchronous socket connection pool in Python or Node.js? I have a Linux micro instance on EC2 and I need to run this application for 1000 users.
I am exploring Redis+Tornado to implement this. Are there any better alternatives?

This can get messy, and there are a couple of things to consider.
If you are going to use multiple threads, remember that the OS only permits so many threads per CPU; consider multiprocessing instead.
If you go async but run long, blocking polling work in the event loop, it will prevent other clients' requests from being processed.
Solution
When your application absolutely needs to be real-time, I would suggest WebSockets for server-client interaction.
Then, from each client's request, start a single process that listens/polls on your streaming API using multiprocessing in Python. You will essentially create a separate process for each client.
Now, to make your WebSocketHandler and background API streamer interact with each other, you can use the Observer pattern (https://en.wikipedia.org/wiki/Observer_pattern) to notify the WebSocket handler that you have received data from the API.
Make sure that you assign a unique ID to every client and that, when using WebSockets, you only push the data to the intended client.
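For illustration, here is a minimal sketch of that setup with Tornado on the WebSocket side; stream_chats() and get_access_token() are hypothetical stand-ins for the third-party streaming client and your own token lookup:
```python
# A minimal sketch, assuming Tornado for the WebSocket side. stream_chats()
# and get_access_token() are hypothetical placeholders, not a real API.
import multiprocessing
import queue as queue_module

import tornado.ioloop
import tornado.web
import tornado.websocket


def get_access_token(client_id):
    return "token-for-%s" % client_id  # placeholder lookup


def stream_chats(access_token):
    # Placeholder generator standing in for the real third-party client.
    import time
    while True:
        time.sleep(1)
        yield {"token": access_token, "text": "example message"}


def stream_worker(client_id, out_queue):
    # Runs in its own process: one per connected client.
    for message in stream_chats(get_access_token(client_id)):
        out_queue.put(message)


class ChatSocketHandler(tornado.websocket.WebSocketHandler):
    def open(self, client_id):
        self.client_id = client_id
        self.queue = multiprocessing.Queue()
        self.worker = multiprocessing.Process(
            target=stream_worker, args=(client_id, self.queue), daemon=True)
        self.worker.start()
        # Drain the queue from the IOLoop so the handler never blocks it;
        # this plays the role of the observer being notified of new API data.
        self.poller = tornado.ioloop.PeriodicCallback(self.drain, 100)
        self.poller.start()

    def drain(self):
        while True:
            try:
                message = self.queue.get_nowait()
            except queue_module.Empty:
                break
            self.write_message(message)  # only this client receives it

    def on_close(self):
        self.poller.stop()
        self.worker.terminate()


if __name__ == "__main__":
    app = tornado.web.Application([(r"/chat/(\w+)", ChatSocketHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```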
EDIT:
Web:
On your question regarding Tornado: it is a good lightweight framework for serving up to maybe 1000 users. For anything more than that, I would suggest looking at Django, as it will let you be more productive writing code, and there are lots of tools that the community has developed over time.
Database:
Redis is a good choice if you need a very fast NoSQL DB; also have a look at MongoDB. If you require a multi-region DB, I would suggest going with Cassandra or CouchDB because of their partitioned nodes.

Related

Proper Celery Monitoring and File-Management with Web-Services

We are working on an Internet and intranet platform that serves client requests through web applications.
There are heavy-weight computations on database entries and files. We want to update the state of those computations via push notifications to the client and make changes to files without the risk of race conditions. The architecture is supposed to run on both low-scale single-server environments and high-scale cluster environments.
So far, we are running a Django web server with PostgreSQL, the Python library Channels, and RabbitMQ as message broker.
Once an HTTP request from a client arrives in Django, we trigger the task via task.delay() and immediately return the task_id to the client. The client then opens a websocket to another Django route and hands over the task_ids he is interested in. Django then polls the state of the task via AsyncResult(task_id).state. Once the state changes, we read the results via AsyncResult(task_id).get and push the task results to the client.
Here is a similar sequence diagram from another project I found online (source, 18.09.21).
Something that is not shown in the diagram: the channels_worker has to fetch the file it is working on from Django. Part of the result is not for the client but is used to update the file. Django locks and updates the file locally as soon as the client asks for it and Django receives the task results from Celery (the changes only add attributes and will not conflict with each other).
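For reference, a minimal sketch of that polling flow, assuming Django Channels, a configured Celery app, and Python 3.9+ (the consumer name and message shape are illustrative, not our actual code):
```python
# Minimal sketch of the websocket route that polls Celery task state and
# pushes results back to the client. Illustrative only.
import asyncio

from celery.result import AsyncResult
from channels.generic.websocket import AsyncJsonWebsocketConsumer


class TaskStateConsumer(AsyncJsonWebsocketConsumer):
    """Client sends {"task_ids": [...]}; we poll Celery and push back results."""

    async def connect(self):
        await self.accept()

    async def receive_json(self, content, **kwargs):
        for task_id in content.get("task_ids", []):
            asyncio.create_task(self.poll_task(task_id))

    async def poll_task(self, task_id, interval=1.0):
        result = AsyncResult(task_id)  # assumes the project's Celery app is the default app
        # ready()/get() talk to the result backend, so run them in a thread
        # to avoid blocking the event loop.
        while not await asyncio.to_thread(result.ready):
            await asyncio.sleep(interval)
        value = await asyncio.to_thread(result.get, propagate=False)
        await self.send_json({"task_id": task_id,
                              "state": result.state,
                              "result": value})
```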
My thoughts about this architecture are:
Monitoring of the Celery events is poor so far:
it is only triggered by the client, which has to know about the tasks to begin with,
Django is not suited for monitoring,
and polling is not efficient in general.
The file management seems fishy.
I would prefer proper monitoring, where events are pushed to Django and to the client. The client has to be able to consume the events at any later time.
I have some thoughts about solutions, but I would like to hear your opinions first. I can bring mine into the discussion later.
Greetings
Edit 1
From other sources I got helpful information regarding a good strategy.
Instead of Django "monitoring" the Celery tasks, we can use a dedicated websocket service, e.g. built with FastAPI, that monitors task events and propagates them to the clients via websocket.
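A minimal sketch of such a monitor, using Celery's real-time event API (the broker URL and notify_client() are placeholders; forwarding events to websocket clients is left out):
```python
# Minimal sketch of a task-event monitor using Celery's real-time event API.
# Workers must run with task events enabled (celery worker -E).
from celery import Celery

app = Celery(broker="amqp://guest@localhost//")  # assumed broker URL


def notify_client(event):
    # Placeholder: look up the task owner and push the event over a websocket.
    print(event["type"], event["uuid"])


def monitor():
    with app.connection() as connection:
        receiver = app.events.Receiver(connection, handlers={
            "task-succeeded": notify_client,
            "task-failed": notify_client,
        })
        receiver.capture(limit=None, timeout=None, wakeup=True)


if __name__ == "__main__":
    monitor()
```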
The client doesn't have to know about its running tasks per se. Instead we can track ownership of tasks, and the client only has to authenticate himself. The whole security layer will be implemented anyway, and it is supported by Celery.
For the file management, we should use a dedicated object storage like MinIO. This service can become a subscriber to task events related to files.
We all like Python, but we don't have to reinvent the wheel whenever we want better monitoring or more control over the behavior of our systems.
That being said, I would recommend re-architecting the solution to decrease the complexity of your Django application by exploring what native cloud offerings provide in terms of microservice architecture (API Gateway), AWS SQS and SNS, computation, and storage options for your files.
Such an approach takes care of a lot of the monitoring, configuration, and file management activities, and most importantly your monolithic application could scale without code changes or additional configuration.

Multi-client web-cam access and stream processing from python server in real-time

I am in the process of making a web application that takes a webcam stream from the client via their browser and, in real time, sends it to a Python server (probably Flask) that processes the frames and sends a response back to the user. The backend has to be capable of handling webcam streams from multiple clients simultaneously.
I am trying to grasp the framework for this entire application. What I have in mind is the following:
The user accesses the webcam via their browser (e.g. using WebcamJS), and the frames are sent from the frontend to the backend through a websocket. The task here is to establish a seamless handshake between the multiple clients and their processing requests.
There is a need for concurrency if the processing is to be done in real time: multiple instances of the same image-processing algorithm need to run at once. My take is to use multiple threads for this purpose, or is there a better way of doing this? Is this even feasible, given that the image-processing algorithm (a trained model) takes some time to load? It has to stay initialized at the backend rather than be loaded from scratch on every request.
The response from the image-processing algorithm needs to get back to the frontend, and the process goes on.
What I really need help with is drawing out the complete framework for this implementation. Any suggestions on the modules/frameworks to use, with some example implementations, would be greatly appreciated.
Thank you.
You can use Flask for your web server and Keras to process the video frames.
The standard library multiprocessing module will also be helpful for handling multiple feeds at once.
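A rough sketch of that combination, assuming a fork-based start method (Linux); load_model() and run_inference() are placeholders for the real Keras model loading and inference:
```python
# Flask endpoint handing frames to a pool of worker processes, each loading
# the model once at startup instead of on every request. Illustrative only.
import multiprocessing

from flask import Flask, request, jsonify

app = Flask(__name__)
_model = None


def load_model(path):
    # Placeholder for e.g. tensorflow.keras.models.load_model(path).
    return object()


def init_worker():
    # Runs once per worker process: keep the model loaded between requests.
    global _model
    _model = load_model("model.h5")


def run_inference(frame_bytes):
    # Placeholder: decode frame_bytes and run it through the preloaded _model.
    return {"label": "example", "frame_size": len(frame_bytes)}


# Worker processes are created once, each calling init_worker() at startup.
pool = multiprocessing.Pool(processes=4, initializer=init_worker)


@app.route("/process", methods=["POST"])
def process_frame():
    frame = request.get_data()  # raw frame bytes sent by the browser
    result = pool.apply(run_inference, (frame,))  # blocks this request only
    return jsonify(result)


if __name__ == "__main__":
    app.run(threaded=True)
```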

add asynchronous capabilities to django

I have a mature, production django-tastypie server with an Angular client, and I need to update the client with real-time data. I was thinking about websockets for the client.
My question is which strategy is best for the server side:
use some django plugin that handles asynchronism
raise a new tornado server for the sake of handling the async part (and then make it learn my django users/authentication model)
embed tornado inside django (like this)
What is your recommendation? Or maybe something else I didn't think of?
I would suggest using this:
https://github.com/jrief/django-websocket-redis
I have used it in production and it works very well; the main benefit is that it is non-blocking and integrates with Django's auth system (you can message specific users, groups or all users via the websocket).
@Randi's suggestion to use Celery if you need to run async tasks is a very good one: Celery is excellent. Combine it with django-websocket-redis and you can perform long-running async tasks and give your users real-time updates as the task runs.
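A minimal sketch of that combination, based on the publisher API from the django-websocket-redis README (the facility name, the 5-step loop, and do_heavy_work() are purely illustrative):
```python
# A Celery task that pushes progress updates to one user through
# django-websocket-redis. Sketch only, not production code.
import time

from celery import shared_task
from ws4redis.publisher import RedisPublisher
from ws4redis.redis_store import RedisMessage


def do_heavy_work(step):
    time.sleep(1)  # placeholder for the real long-running work


@shared_task
def long_running_task(username):
    # Messages on this facility reach only the named user over the websocket.
    publisher = RedisPublisher(facility="notifications", users=[username])
    for step in range(1, 6):
        do_heavy_work(step)
        publisher.publish_message(RedisMessage("step %d/5 done" % step))
    publisher.publish_message(RedisMessage("task finished"))
```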
An alternative, but probably less appropriate, solution would be to poll the server for changes/notifications from your Angular client every 2-5 seconds. However, this is the "old school" approach and can add a lot of load to your server. On the other hand, it saves the overhead of managing extra services and would probably be the better choice if you have a small number of users.

Making an asynchronous interface appear synchronous to mod_python users

I have a Python-driven web interface powered by Apache 2.2 with mod_python and Python 2.4. I need to make an asynchronous process appear synchronous to users of this web interface.
When users access one module on this website:
An external SOAP interface will be contacted with a unique identifier and will respond with a number N
The external interface will respond asynchronously by contacting a SOAP server on my machine between 1 and 10 times (the number N tells us how many responses we will receive)
I need to somehow aggregate these responses and pass them to the original module which will display the information back to the user. The goal is to make the process appear synchronous to the user.
What is the best way to handle this synchronization issue? Is this something Twisted would be well-suited for?
I am not restricting myself to Python for the solution, though it is preferred because everything else on the server is in Python. I prefer a solution that is both scalable and will take a minimal amount of programming time (though I understand that these attributes are somewhat at odds).
Maybe you can use Orbited to get Ajax push with long-lived HTTP connections to your web clients. Orbited is based on Twisted, so I think it makes sense to look at it if you already know Twisted. Have a look at this tutorial to get started.
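Whichever transport you use to push the result back, you still have to collect the N callbacks before replying. A minimal sketch of that aggregation with Twisted's Deferred (all names are illustrative): expect_responses() is called after the external SOAP interface returns N, and on_soap_response() is wired into the local SOAP server that receives the callbacks.
```python
# Aggregating N asynchronous SOAP callbacks into one result. Sketch only.
from twisted.internet import defer

_pending = {}  # request_id -> {"expected": N, "received": [...], "deferred": d}


def expect_responses(request_id, n):
    d = defer.Deferred()
    _pending[request_id] = {"expected": n, "received": [], "deferred": d}
    return d  # fires with the full list once all N replies have arrived


def on_soap_response(request_id, payload):
    entry = _pending.get(request_id)
    if entry is None:
        return
    entry["received"].append(payload)
    if len(entry["received"]) == entry["expected"]:
        del _pending[request_id]
        entry["deferred"].callback(entry["received"])
```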

Many producer, single consumer with python/mod_wsgi

I have a Pylons web application served by Apache (mod_wsgi, prefork). Because of Apache, there are multiple separate processes running my application code concurrently. Some of the non-critical tasks that the application does I want to defer to background processing, to improve "live" response times. So I'm thinking of a task queue: many Apache processes adding tasks to the queue, and a single separate Python process processing them one by one and removing them from the queue.
The queue should preferably be persisted to disk so that queued, unprocessed tasks are not lost because of a power outage, server restart, etc. The question is: what would be a reasonable way to implement such a queue?
As for what I've tried: I started with a simple SQLite database and a single table in it for storing queue items. In load testing, when increasing the level of concurrency, I started getting "database locked" errors, as expected. The quick-and-dirty fix was to replace SQLite with MySQL; it handles the concurrency issues well but feels like overkill for the simple thing I need to do. Queue-related DB operations also show up prominently in my profiling reports.
A message broker like Apache's ActiveMQ is an ideal solution here.
The pipeline could be following:
The application process responsible for handling HTTP requests generates replies quickly and sends the low-priority, heavy tasks to an AMQ queue.
One or more other processes subscribe to the AMQ queue, consume these heavy tasks, and do whatever is intended to be done with them.
The requirement of queue persistence is fulfilled out of the box, since ActiveMQ stores not-yet-consumed messages in persistent storage. Furthermore, it scales quite well, since you are free to deploy multiple HTTP apps, multiple consumer apps, and AMQ itself on different machines.
We use something like this in our project, written in Python and using STOMP as the underlying communication protocol.
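A rough sketch of both sides of that pipeline, using the stomp.py client against ActiveMQ's default STOMP port (queue name, credentials, and the task handling are illustrative):
```python
# Producer (web app) and consumer (background worker) over STOMP. Sketch only.
import json
import time

import stomp

QUEUE = "/queue/background-tasks"


def send_task(task):
    """Producer side: called from a web process to enqueue a heavy task."""
    conn = stomp.Connection([("localhost", 61613)])
    conn.connect("admin", "admin", wait=True)
    # The persistent header asks ActiveMQ to keep the message on disk.
    conn.send(destination=QUEUE, body=json.dumps(task),
              headers={"persistent": "true"})
    conn.disconnect()


class TaskListener(stomp.ConnectionListener):
    """Consumer side: runs in the single background worker process."""

    def on_message(self, frame):  # older stomp.py passes (headers, message)
        task = json.loads(frame.body)
        print("processing task:", task)  # placeholder for the real work


def run_worker():
    conn = stomp.Connection([("localhost", 61613)])
    conn.set_listener("", TaskListener())
    conn.connect("admin", "admin", wait=True)
    conn.subscribe(destination=QUEUE, id=1, ack="auto")
    while True:
        time.sleep(1)  # keep the worker process alive


if __name__ == "__main__":
    run_worker()
```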
A web server (any web server) is a multi-producer, single-consumer process.
A simple solution is to build a wsgiref or Werkzeug backend server to handle your backend requests.
Since this "backend" server is built using WSGI technology, it's very, very similar to the front-end web server, except that it doesn't produce HTML responses (JSON is usually simpler). Other than that, it's very straightforward.
You design RESTful transactions for this backend. You use all of the various WSGI features for URI parsing, authorization, authentication, etc. You generally don't need session management, since RESTful servers don't usually offer sessions.
If you run into serious scalability issues, you simply wrap your backend server in lighttpd or some other web engine to create a multi-threaded backend.
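A bare-bones sketch of such a backend, using wsgiref from the standard library (the /tasks route and handle_task() are illustrative only):
```python
# Tiny WSGI backend that accepts deferred tasks over a REST-ish POST and
# returns JSON. Sketch only.
import json
from wsgiref.simple_server import make_server


def handle_task(task):
    # Placeholder: enqueue or process the deferred work here.
    print("got task:", task)


def backend_app(environ, start_response):
    if environ["REQUEST_METHOD"] == "POST" and environ["PATH_INFO"] == "/tasks":
        length = int(environ.get("CONTENT_LENGTH") or 0)
        task = json.loads(environ["wsgi.input"].read(length) or b"{}")
        handle_task(task)
        start_response("202 Accepted", [("Content-Type", "application/json")])
        return [json.dumps({"status": "accepted"}).encode()]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [b'{"error": "not found"}']


if __name__ == "__main__":
    # JSON responses only; the front-end web server stays responsible for HTML.
    make_server("127.0.0.1", 8081, backend_app).serve_forever()
```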
