Architecting an ML API in Django - python

I work on a heavily ML-based project for our clients that is computationally intensive (complex, multi-level similarity scores, NLP, etc.). For the prototype, we delivered a Django REST Framework API that has access to the client's database, and with every request to specific endpoints it literally does all the ML work on the fly in the backend.
Now that we are scaling and more user activity is taking place in production, the app lags a lot. A simple profiling run shows that a single POST request can take up to 20 seconds to respond. So no matter how much I optimize in terms of horizontal scaling, I can't get rid of the bottleneck of all the calculations happening within the API calls. I have a hunch that caching could be part of a solution, but I am not sure. I can imagine a lot of 'theoretical' solutions, but I don't want to reinvent the wheel (or shall I say, re-discover the wheel).
Are there specific design architectures for ML or computationally intensive REST API calls that I can refer to in redesigning my project?

Machine learning and natural language processing systems are often resource-hungry, and in many cases there is not much one can do about that directly. Some operations simply take longer than others, but that is actually not the main problem in your case. The main problem is that the user gets no feedback while the backend does its job, which is a poor user experience.
Therefore, it is not recommended to perform resource-heavy computation within the traditional HTTP request-response cycle. Instead of calling the ML logic within the API view and waiting for it to finish, consider setting up an asynchronous task queue to perform the heavy lifting independently of the synchronous request-response cycle.
In the context of Django, the standard task queue implementation is Celery. Setting it up requires some learning and additional infrastructure (e.g. a Redis instance and worker servers), but there is really no other way to avoid breaking the user experience.
Once everything is set up, you can start an asynchronous task whenever your API endpoint receives a request and immediately inform the user, via a normal view response, that their request is being processed. Once the ML task has finished and its results have been written to the database (using a Django model, of course), you can notify the user (e.g. by mail or directly in the browser via WebSockets) to view the analysis results in a dedicated results view.
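As a rough sketch of that pattern (the run_analysis task, AnalysisResult model, and view below are illustrative names, not part of the original project):

```python
# tasks.py - a minimal Celery task sketch (assumes a configured Celery app and a
# hypothetical AnalysisResult model for storing the output).
from celery import shared_task

from .models import AnalysisResult  # hypothetical results model

@shared_task
def run_analysis(document_id):
    # ... the expensive ML / NLP work would happen here, outside the request cycle ...
    result = {"document_id": document_id, "score": 0.0}  # placeholder output
    AnalysisResult.objects.create(document_id=document_id, payload=result)


# views.py - the API view only enqueues the task and responds immediately.
from rest_framework import status
from rest_framework.response import Response
from rest_framework.views import APIView

from .tasks import run_analysis

class AnalysisView(APIView):
    def post(self, request):
        task = run_analysis.delay(request.data["document_id"])
        # 202 Accepted: the work is queued, not finished.
        return Response({"task_id": task.id}, status=status.HTTP_202_ACCEPTED)
```

The client can then poll a results endpoint (or be notified via WebSockets) using the returned task id.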

Related

How to implement an auto-scaling solution requiring synchronous subscription?

I have to process requests that users of my "service" publish to a Pub/Sub topic, in a Python application that has a main loop.
Each instance of the service can only process one request at a time due to a limitation of the application. Bursts of requests (~10 at the beginning, growing to ~10^6-10^7) alternate with mostly total idle time. The cold-start time of the application is very high compared to the processing time for a single request.
My code will be a plug-in that polls the subscription, calls methods in the application based on the messages, and then saves data to Cloud Storage and BigQuery.
I have read through the Cloud documentation and it seems that Cloud Run is the right solution, together with the synchronous subscription API in Python. I excluded Cloud Functions because it seems better suited for asynchronous work.
However, I did not fully understand how the auto-scaling works. Basically it would have to scale based on the average processing time for each request and the length of the queue, taking the average start-up time of a container into account.
Unfortunately, I did not find a tutorial or example for such a use case, especially not one that really explains how the auto-scaling happens in detail.
Does anyone have something like that, or could someone explain it to me here?
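For reference, the synchronous pull loop described in the question might look roughly like this with the google-cloud-pubsub client (v2.x); the project and subscription names and the handle_request helper are placeholders:

```python
# Minimal synchronous pull loop sketch using google-cloud-pubsub (v2.x API).
# "my-project", "my-subscription", and handle_request are placeholders.
from google.api_core import exceptions
from google.cloud import pubsub_v1

def handle_request(payload: bytes) -> None:
    """Placeholder for the plug-in logic that drives the application."""
    print("processing", payload)

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

while True:
    try:
        # Pull at most one message, since the application can only handle one at a time.
        response = subscriber.pull(
            request={"subscription": subscription_path, "max_messages": 1},
            timeout=30,
        )
    except exceptions.DeadlineExceeded:
        continue  # no message arrived within the timeout; poll again
    for received in response.received_messages:
        handle_request(received.message.data)
        subscriber.acknowledge(
            request={"subscription": subscription_path,
                     "ack_ids": [received.ack_id]},
        )
```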

Is it convenient to use Airflow Architecture to orchestrate Complex Python Microservices?

I have a data product that receives real-time streams of vehicle data, processes the information, and then exposes it through REST APIs. I am looking to rebuild the ETL side of the system in order to improve reliability and clean up the architecture. I am considering using Apache Airflow, but have some doubts.
The Python microservices I need to orchestrate are complex and have many different dependencies, so a monolithic solution would be huge and tricky if implemented with Python operators in Airflow. Do you see any convenient way of using Airflow for these needs? Might it be a better solution to use Kafka or some other messaging system?
Thanks
Airflow is definitely used to orchestrate ETL systems; however, if you require real-time ETL, a message queue is probably better.
However, you can trigger DAGs externally if that suits your use case. Depending on how distributed you want the setup to be, you can also have multiple instances running to parallelize and speed up your operations.
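As an illustration, externally triggering a DAG through Airflow's stable REST API (Airflow 2.x) might look roughly like this; the host, credentials, DAG id, and conf payload are placeholders:

```python
# Trigger a DAG run externally via Airflow's stable REST API (Airflow 2.x).
# Host, DAG id, credentials, and conf payload below are placeholders.
import requests

AIRFLOW_URL = "http://localhost:8080/api/v1"
DAG_ID = "process_vehicle_batch"  # hypothetical DAG name

response = requests.post(
    f"{AIRFLOW_URL}/dags/{DAG_ID}/dagRuns",
    auth=("airflow", "airflow"),               # basic auth; depends on your auth backend
    json={"conf": {"source": "stream-bridge"}},  # parameters passed to the DAG run
)
response.raise_for_status()
print(response.json()["dag_run_id"])
```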
If you think batch processing will be helpful, you can have a pseudo-realtime set-up with a very fast schedule. This will (in my opinion) be easier to debug than a traditional message queue system.
Airflow gives you a lot of modularity, which lets you break down a complex architecture and debug components easily compared to a messaging queue. It also has provisions for automatic requeueing, as well as named queues you can use if you need to send large data loads to a separate processing queue on a machine with more resources.
I have worked on a use case similar to yours, where we needed to process real-time data and display it via APIs, and we used a mixture of Airflow and messaging queues. In my experience it was always a lot easier to figure out where and why something went wrong with Airflow, because the Airflow UI makes getting a high-level overview incredibly efficient.

Celery Django explanation

I have been learning about Django recently and have stumbled upon Celery. I don't seem to understand what it does. I've been to their site to no avail. Can anyone explain the concept and its real-world applications to me (in simple terms)?
Celery is an "asynchronous task queue/job queue based on distributed message passing". It is just a task queue: something you put tasks into so that they get done as soon as possible. You have a Celery instance that you integrate directly with your Django or Python app; this is what you use to talk to Celery. Then you configure Celery to have 'workers' that perform the tasks you give them. The whole point is to be able to run tasks that don't fit well within the normal request/response cycle that Django handles so well.
What kinds of tasks are these? Well, as said before, they don't fit into the normal request/response cycle. The best example I can think of is email: if you're building a web app and you want to keep your users, you need to keep them engaged and coming back, and a good way to do that is by sending emails. You send them once a week or once a day, and users can perhaps configure when they are sent. This would fit horribly within the request/response cycle, but it's perfect for something like Celery.
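A minimal sketch of such an email task, using Celery with Django's send_mail; the task name, schedule, and addresses are made up for illustration:

```python
# tasks.py - hypothetical weekly digest task; names and addresses are placeholders.
from celery import shared_task
from django.core.mail import send_mail

@shared_task
def send_weekly_digest(user_email):
    # Runs on a Celery worker, completely outside any request/response cycle.
    send_mail(
        subject="Your weekly digest",
        message="Here is what happened this week...",
        from_email="noreply@example.com",
        recipient_list=[user_email],
    )

# In the Celery app config, the task can be scheduled with Celery beat instead of
# being triggered by a request, e.g.:
# app.conf.beat_schedule = {
#     "weekly-digest": {
#         "task": "myapp.tasks.send_weekly_digest",
#         "schedule": 7 * 24 * 60 * 60,          # every week, in seconds
#         "args": ("user@example.com",),
#     },
# }
```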
Other examples are long-running jobs with lots of computation. While you would typically use something like Hadoop for really big computations, you can schedule some queries with Celery. You could also use it to schedule builds if you're doing something like Travis. The uses go on and on, but you probably get the point.

Coordinating distributed Python processes using queuing or REST web service

Server A has a process that exports n database tables as flat files. Server B contains a utility that loads the flat files into a DW appliance database.
A process runs on Server A that exports and compresses about 50-75 tables. Each time a table is exported and a file is produced, a .flag file is also generated.
Server B has a bash process that repeatedly checks for each .flag file produced by server A. It does this by connecting to A and checking for the existence of a file. If the flag file exists, Server B will scp the file from Server A, uncompress it, and load it into an analytics database. If the file doesn't yet exist, it will sleep for n seconds and try again. This process is repeated for each table/file that Server B expects to be found on Server A. The process executes serially, processing a single file at a time.
Additionally: The process that runs on Server A cannot 'push' the file to Server B. Because of file-size and geographic concerns, Server A cannot load the flat file into the DW Appliance.
I find this process cumbersome, and it just so happens to be up for a rewrite/revamp. I'm proposing a messaging-based solution. I initially thought this would be a good candidate for RabbitMQ (or the like), where:
Server A would write a file, compress it and then produce a message for a queue.
Server B would subscribe to the queue and would process files named in the message body.
I feel that a messaging-based approach would not only save time as it would eliminate the check-wait-repeat cycle for each table, but also permit us to run processes in parallel (as there are no dependencies).
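A rough sketch of that producer/consumer split with the pika RabbitMQ client; the queue name, broker host, file path, and transfer_and_load helper are placeholders:

```python
# Sketch of the proposed flow with the pika RabbitMQ client.
# Queue name, broker host, file paths, and transfer_and_load are placeholders.
import pika

def transfer_and_load(path):
    """Placeholder for the scp + uncompress + load step."""
    print("loading", path)

# --- Server A: after exporting and compressing a table, publish its file name ---
connection = pika.BlockingConnection(pika.ConnectionParameters(host="broker-host"))
channel = connection.channel()
channel.queue_declare(queue="table_exports", durable=True)
channel.basic_publish(
    exchange="",
    routing_key="table_exports",
    body="/exports/customers.csv.gz",
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message to disk
)
connection.close()

# --- Server B: consume file names and run the transfer-and-load for each ---
def on_message(ch, method, properties, body):
    transfer_and_load(body.decode())
    ch.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="broker-host"))
channel = connection.channel()
channel.queue_declare(queue="table_exports", durable=True)
channel.basic_consume(queue="table_exports", on_message_callback=on_message)
channel.start_consuming()
```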
I showed my team a proof of concept using RabbitMQ and they were all receptive to using messaging. A number of them quickly identified other opportunities where we would benefit from message-based processing. One such area would be populating our DW dimensions in real time rather than through batch.
It then occurred to me that an MQ-based solution might be overkill given the low volume (50-75 tasks), especially since our operations team would have to install RabbitMQ (and its dependencies, including Erlang) and it would introduce new administration headaches.
I then realized this could be made simpler with a REST-based solution. Server A could produce a file and then make an HTTP call to a simple (web.py) web service on Server B. Server B could then initiate the transfer-and-load process based on the URL that is called. Given the time it takes to transfer, uncompress, and load each file, I would likely use Python's multiprocessing to create a subprocess that loads each file.
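A minimal sketch of that web.py service on Server B; the URL scheme and load_file helper are hypothetical:

```python
# Sketch of the REST alternative on Server B using web.py and multiprocessing.
# The URL scheme and load_file() helper are hypothetical.
import multiprocessing
import web

def load_file(table_name):
    """Placeholder for the transfer-and-load work."""
    print("loading", table_name)

urls = ("/load/(.+)", "Load")
app = web.application(urls, globals())

class Load:
    def POST(self, table_name):
        # Spawn a subprocess so the HTTP call returns immediately
        # while the transfer-and-load runs in the background.
        proc = multiprocessing.Process(target=load_file, args=(table_name,))
        proc.start()
        return "queued: %s" % table_name

if __name__ == "__main__":
    app.run()
```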
I'm thinking that the REST-based solution is ideal simply because it's simpler. In my opinion, an MQ would be more appropriate for higher-volume tasks, but we're only talking (for now) about 50-75 operations, with potentially more to come.
Would a REST-based approach be a good solution given my requirements and volume? Are there other frameworks or OSS products that already do this? I'm looking to add messaging without creating other administration and development headaches.
Message brokers such as Rabbit contain practical solutions for a number of problems:
multiple producers and consumers are supported without risk of duplication of messages
atomicity and unit-of-work logic provide transactional integrity, preventing duplication and loss of messages in the event of failure
horizontal scaling--most mature brokers can be clustered so that a single queue exists on multiple machines
no-rendezvous messaging--it is not necessary for sender and receiver to be running at the same time, so one can be brought down for maintenance without affecting the other
preservation of FIFO order
Depending on the particular web service platform you are considering, you may find that you need some of these features and must implement them yourself if not using a broker. The web service protocols and formats such as HTTP, SOAP, JSON, etc. do not solve these problems for you.
In my previous job, project management passed on using message brokers early on, but later the team ended up implementing quick-and-dirty logic in our web service architecture to solve some of the same issues listed above. We had less time to deliver business value because we were fixing so many concurrency and error-recovery issues.
So while a message broker may seem on its face like a heavyweight solution, and may actually be more than you need right now, it does have a lot of benefits that you may need later without yet realizing it.
As wberry alluded to, a REST or webhook-based solution can be functional but will not be very tolerant of failure. Paying the operations price for messaging up front will pay long-term dividends, as you will find additional problems that are a natural fit for the messaging model.
Regarding other OSS options: if you are considering stream-based processing in addition to this specific use case, I would recommend taking a look at Apache Kafka. Kafka provides messaging semantics similar to RabbitMQ's, but is tightly focused on processing message streams (not to mention that it has been battle-tested in production at LinkedIn).
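To give a feel for the API, a minimal producer/consumer pair with the kafka-python client might look like this; the topic, broker address, and transfer_and_load helper are placeholders:

```python
# Minimal kafka-python sketch; topic name, broker address, and the
# transfer_and_load helper are placeholders.
from kafka import KafkaConsumer, KafkaProducer

def transfer_and_load(path):
    """Placeholder for the scp + uncompress + load step."""
    print("loading", path)

# Producer: publish the name of an exported file to a topic.
producer = KafkaProducer(bootstrap_servers="broker-host:9092")
producer.send("table_exports", b"/exports/customers.csv.gz")
producer.flush()

# Consumer: process each published file name as it arrives.
consumer = KafkaConsumer(
    "table_exports",
    bootstrap_servers="broker-host:9092",
    group_id="loader",
    auto_offset_reset="earliest",
)
for message in consumer:
    transfer_and_load(message.value.decode())
```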

Backend processing for Django

I'm working on a turn-based web game that will perform all world updates (player orders, physics, scripted events, etc.) on the server. For now, I could simply update the world in a web request callback. Unfortunately, that naive approach is not at all scalable. I don't want to bog down my web server when I start running many concurrent games.
So what is the best way to separate the load from the web server, ideally in a way that could even be run on a separate machine?
A simple python module with infinite loop?
A distributed task in something like Celery?
Some sort of cross-platform Cron scheduler?
Some other fancy Django feature or third-party library that I don't know about?
I also want to minimize code duplication by using the same model layer. That probably means my service would need access to the Django model code, so that definitely determines how I architect the service.
I think Celery, which you mention in your question, is the way to go here. It will interface nicely with the rest of your setup, support your eventual aim of separating out the systems, and is compatible with Django.
I'd just write the backend to use the Django database interface (look at the setup code in your manage.py), spawn it as its own process, and interface with it using Protocol Buffers. That route should move to a separate machine with little work. MPI may be an option, too.
Pipes, FIFOs, and most other IPC mechanisms require both processes to be on the same box.
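A minimal sketch of such a standalone worker process that reuses the Django model layer; the settings module, Game model, and update_world helper are placeholders, and the Protocol Buffers IPC is omitted:

```python
# Standalone worker process that reuses the Django ORM outside the web server.
# The settings module, Game model, and update_world helper are placeholders;
# the IPC layer (Protocol Buffers, sockets, etc.) is omitted here.
import os
import time

import django

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mygame.settings")
django.setup()  # configure installed apps and the database connection

from world.models import Game  # import models only after django.setup()

def update_world(game):
    """Placeholder for player orders, physics, and scripted events."""
    print("updating", game.pk)

def main():
    while True:
        for game in Game.objects.filter(active=True):
            update_world(game)
        time.sleep(1)  # crude fixed tick; a real loop would be smarter

if __name__ == "__main__":
    main()
```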
Though I have to point out a flaw in your premise:
Unfortunately, that naive approach is not at all scalable. I don't want to bog down my web server when I start running many concurrent games.
If you run concurrent games, so long as you keep all the parts for a given game on the same server, this is a non-issue, unless there's a common resource needed by all games. Then the real issue becomes load balancing across the servers.
