Webservices vs. Message Queues? - python

What are the implications of using web services versus message queues in an application?
In my case, I need to connect a Django web application with a Python application, and I need two-way communication between them. Sometimes the web app sends requests to the Python app to activate a few hardware devices, and sometimes the Python app needs to be queried for data.
The issue is that my entire application depends on data being sent to or received from the core Python application instantaneously, so I cannot afford to waste system resources by polling. I need something like a listener/receiver to send and receive data without manually triggering a query every few seconds.
I am using Django for web application framework and Python for my core application.
I already have ZMQ in use internally as a multi-agent communication platform. If a message queue is the right choice, all I need to do is connect to it and send and receive data.
If a web service is the better choice, I would have to integrate web services from scratch. In that case, what is the preferred way to create a web service in Python?
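To illustrate how little glue the message-queue route needs, here is a minimal sketch with pyzmq using a simple REQ/REP pair; the port number and message format are made up for the example:
import zmq

# Core application side: reply to requests coming from the web app.
def serve_core():
    sock = zmq.Context.instance().socket(zmq.REP)
    sock.bind("tcp://127.0.0.1:5555")
    while True:
        request = sock.recv_json()          # blocks until a request arrives
        if request.get("cmd") == "activate":
            reply = {"status": "device activated"}
        else:
            reply = {"data": "whatever was queried"}
        sock.send_json(reply)

# Django side: call this from a view to talk to the core app.
def query_core(cmd):
    sock = zmq.Context.instance().socket(zmq.REQ)
    sock.connect("tcp://127.0.0.1:5555")
    sock.send_json({"cmd": cmd})
    return sock.recv_json()                 # the core app's reply

For the push direction (the core app notifying the web app without being asked), a PUB/SUB socket pair works the same way and avoids polling entirely.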

Related

Using GRPC for SDK & microservice API with Python

I am designing a micro-service based system that will be accessed by a python SDK, from an application.
The SDK will be used to access machine learning models hosted as micro-services in a backend system.
I understand that gRPC relies on protobuf, which is lightweight and supports streaming. We do need to send data vectors to the hosted models, so protobuf is appealing.
The question is more about gRPC itself, which as I understand it runs over HTTP/2.
gRPC seems useful for accessing an API hosted on the backend; however, there is also a need to keep a live connection open to receive general update events from the server.
For example, I would like to have a "context" where incoming events can arrive, mainly for asynchronous communication: after several model invocations are performed, the system might use aggregated data on the backend and send an event when a prediction crosses some threshold.
Hypothetical usage example:
ctx = BackendSystemSDK.connect(APIGatewayHost, port)
ctx.registerCallback(SomeAlertCallbackFunc)
...
ctx.applyModel('model1', SomeVector1)
ctx.applyModel('model2', SomeVector2)
...
# when the program terminates
ctx.close()
I played with gRPC and successfully tested a client and server in Python.
However, I am not sure how to implement the "context" open-connection idea with gRPC.
It is more of a pub/sub concept, but I could not find any examples for testing such an architecture locally. I did see examples for Google Cloud, but they are not relevant, as I want to host everything on-premise using Kubernetes for scale.
The system should support a heavy load of requests, for example processing multiple incoming video streams frame by frame, so 24 requests per minute for 20-50 cameras (or more) could easily be the case.
Is gRPC a good fit for such a scenario, not only for inter-microservice communication but also for the main API access gateway?
And for the "context": maybe it should be implemented as a live-connection component with websockets or another protocol? I really want to simplify development and use a single technology.
I could not fully understand from the documentation whether HTTP/2 and gRPC support a long-running, bi-directional open connection.
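For what it's worth, gRPC does support long-lived server-streaming and bidirectional-streaming RPCs over a single HTTP/2 connection, which maps well onto the "context" idea. Below is a minimal client-side sketch, assuming a hypothetical events.proto with a server-streaming rpc Subscribe(SubscribeRequest) returns (stream AlertEvent), compiled to events_pb2/events_pb2_grpc:
import threading
import grpc
import events_pb2          # hypothetical generated module
import events_pb2_grpc     # hypothetical generated module

class Context:
    def __init__(self, host, port):
        self._channel = grpc.insecure_channel("%s:%d" % (host, port))
        self._stub = events_pb2_grpc.EventsStub(self._channel)
        self._stream = None

    def register_callback(self, callback):
        # The server-streaming call returns an iterator that stays open
        # until the server ends the stream or the client cancels it.
        self._stream = self._stub.Subscribe(events_pb2.SubscribeRequest())
        threading.Thread(target=self._pump, args=(callback,), daemon=True).start()

    def _pump(self, callback):
        try:
            for event in self._stream:      # blocks on the open HTTP/2 stream
                callback(event)
        except grpc.RpcError:
            pass                            # stream cancelled or connection lost

    def close(self):
        if self._stream is not None:
            self._stream.cancel()
        self._channel.close()

Ordinary unary calls such as applyModel can share the same channel, so one gRPC connection can cover both the request/response API and the event stream.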

Multi-client web-cam access and stream processing from python server in real-time

I am in the process of making a web application that takes a webcam stream from the client via their browser and sends it in real time to a Python server (Flask, probably) that processes the frames and sends a response back to the user. The backend has to be capable of handling streams from multiple clients simultaneously.
I am trying to grasp the framework for this entire application. What I have in mind is the following:
The user accesses the webcam via their browser (e.g. using WebcamJS), and the frames are sent from the frontend to the backend through a websocket. The task here is to establish a seamless handshake between the multiple clients and their processing requests.
There is a need for concurrency if the processing is to be done in real time: multiple instances of the same image-processing algorithm need to be executed. My take is to use multiple threads for this purpose, or is there a better way of doing it? Is this even feasible, given that the image-processing algorithm (a trained model) takes some time to load? It has to stay initialized at the backend rather than start from scratch on every request.
The response from the image-processing algorithm needs to get back to the frontend, and the process goes on.
What I really need help with is drawing out the complete framework for this implementation. Any suggestions on modules/frameworks to use, with some implementation pointers, would be greatly appreciated.
Thank you.
You can use Flask for your web server and Keras to process the videos.
The standard library's multiprocessing module will also help you handle multiple feeds at once.
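A rough sketch of that combination: each worker process loads the model once in its initializer, so per-frame requests don't pay the startup cost. load_model and run_inference below are placeholders for the real Keras calls:
import multiprocessing as mp
from flask import Flask, request, jsonify

def load_model():
    # placeholder for e.g. keras.models.load_model("model.h5")
    return object()

def run_inference(model, frame_bytes):
    # placeholder: report the frame size as a dummy "prediction"
    return {"frame_bytes": len(frame_bytes)}

_model = None

def init_worker():
    # runs once per worker process, so the slow model load
    # is not repeated on every request
    global _model
    _model = load_model()

def process_frame(frame_bytes):
    return run_inference(_model, frame_bytes)

app = Flask(__name__)
pool = mp.Pool(processes=4, initializer=init_worker)

@app.route("/frame", methods=["POST"])
def frame():
    result = pool.apply_async(process_frame, (request.data,))
    return jsonify(result.get(timeout=5.0))

if __name__ == "__main__":
    app.run()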

How can I send some data from PHP to Python?

I have a running Python application that needs to receive some data and process it, and I also have a PHP server that can get this data. I want to send JSON data from PHP to my Python app.
Is there any way besides running a Python web server and sending data to it, or inserting into a DB and reading it back from the DB with Python?
Thanks.
I tried using the Python CherryPy web server.
@Niklas D: It would be easier to answer your question if you could give some more context about the application or use case you want to solve.
Some further possibilities are:
Glue code (I have never done it with Python and PHP, only C++ with Python, but you should be able to find examples on the internet, e.g. https://wiki.python.org/moin/IntegratingPythonWithOtherLanguages#PHP)
Messaging systems like RabbitMQ, ActiveMQ, ZeroMQ, etc.
Redis (I know you said no databases, but Redis provides publish/subscribe features, https://redis.io/commands/pubsub, which let you write to Redis from one side and receive the data on the other side without polling the DB all the time, which I guess is your issue with using a database). It's a bit easier to set up and use than a messaging system; see the sketch after this list.
A TCP connection between the Python and PHP applications: https://medium.com/swlh/lets-write-a-chat-app-in-python-f6783a9ac170
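Here is a small sketch of the Redis pub/sub option: the PHP side publishes JSON on a channel (e.g. with Predis), and this Python side blocks on incoming messages instead of polling. The channel name "php-events" is made up:
import json
import redis

r = redis.Redis(host="localhost", port=6379)
pubsub = r.pubsub()
pubsub.subscribe("php-events")

for message in pubsub.listen():             # blocks; no polling loop needed
    if message["type"] == "message":
        data = json.loads(message["data"])
        print("received from PHP:", data)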
If you want to send data to a Python application using web protocols, i.e. POST and GET requests, then you need to create a Python web app to receive and handle those requests, which in turn needs to run on a web server; alternatively, you could build serverless functions to handle this, see https://serverless.com/.
If you want the Python application to pull the data, i.e. send POST and GET requests to your PHP app to ask for the JSON payload, you can build that with Python's standard urllib.request module (https://docs.python.org/3/library/urllib.request.html), or better still, use the Requests package (http://docs.python-requests.org/en/master/).
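For the pull approach, a couple of lines with Requests are enough (the URL is hypothetical):
import requests

resp = requests.get("http://your-php-host/data.php")
resp.raise_for_status()                     # fail loudly on HTTP errors
payload = resp.json()                       # the JSON the PHP side produced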
Or you could save the JSON to a file on disk and then open it with your Python app. You'd need to set up scheduling, or make your PHP app execute Python code on the server... This last suggestion is a bad idea; please don't do it unless your app is isolated and not publicly accessible, or you know how to lock down your security.

Asynchronous Socket connections in Python or Node

I am creating an application that has multiple connections to a third-party chat streaming API (socket-based).
The way it works is: every user has an account on my app and another account on the third-party app. They give me an access token for the third-party chat app, and I connect to the third-party API to stream their chats. This happens for hundreds of users.
I need to create a socket connection pool for every user and run parallel threads. I am using a Python library (for that API) and am able to achieve real-time feeds for single users. How do I implement an asynchronous socket connection pool in Python or Node.js? I have a Linux micro instance on EC2, and I need to run this application for 1,000 users.
I am exploring Redis+Tornado to implement this. Are there any better alternatives?
This will be messy, and there are a couple of things to consider.
If you are going to use multiple threads, remember that you can only run as many per CPU as the OS permits; rather, go with multiprocessing.
If you go async with long-polling processes, a long-running handler will prevent other clients' requests from being processed.
Solution
When your application absolutely needs to be real-time, I would suggest websockets for server-client interaction.
Then, from your client's request, start a single process that listens/polls on your streaming API, using multiprocessing in Python. You will essentially create a separate process for each client.
Now, to make your WebSocketHandler and the background API streamer interact with each other, you can use the Observer pattern (https://en.wikipedia.org/wiki/Observer_pattern) to notify the websocket that you have received data from the API.
Make sure that you assign a unique ID to every client, and that you only post the data to the intended client when using websockets.
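A bare-bones sketch of that observer idea with Tornado websockets; the per-user streamer that would call notify() is left out, and the URL pattern is made up:
import tornado.ioloop
import tornado.web
import tornado.websocket

clients = {}   # user_id -> websocket handler (the "observers")

class ChatSocket(tornado.websocket.WebSocketHandler):
    def open(self, user_id):
        self.user_id = user_id
        clients[user_id] = self             # register this client

    def on_close(self):
        clients.pop(self.user_id, None)     # unregister on disconnect

def notify(user_id, message):
    # Called by the background streamer when data arrives for user_id;
    # only that user's socket gets the message. Schedule it on the
    # IOLoop if calling from another thread.
    handler = clients.get(user_id)
    if handler is not None:
        handler.write_message(message)

app = tornado.web.Application([(r"/ws/([^/]+)", ChatSocket)])

if __name__ == "__main__":
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()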
EDIT:
Web:
Also, on your question regarding Tornado: it is a good lightweight framework for serving a modest number of users, maybe up to 1,000. For anything more than that, I would suggest looking at Django, as it will let you be more productive writing code, and there are lots of tools the community has developed over time.
Database:
Redis is a good choice if you need a very fast NoSQL DB; also have a look at MongoDB. If you require a multi-region DB, I would suggest going with Cassandra or CouchDB because of their partitioned nodes.

Long-running connection HTTP server (Python)

I am trying to design a web application that processes large quantities of large mixed-media files coming from asynchronous processes. Each process can take several minutes.
The files are either uploaded as a POST body or pulled by the web server according to a source URL provided. The files can be processed by a variety of external tools in a synchronous or asynchronous way.
I need to be able to load balance this application so I can process multiple large files simultaneously for as much as I can afford to scale.
I think Python is my best choice for this project, but besides that, I am open to any solution. The app can either deliver the file back or rely on a messaging channel to notify clients when processing completes.
Some approaches I thought I might use:
1) Use a non-blocking web server such as Tornado that keeps the connection open until the file processing is done. The external processing command is launched, and the web server waits until the file is ready, then pipes the resulting IO stream directly back to the web app, which returns it. Since the processes sending requests are asynchronous, they can probably afford to wait (unless memory or some other issue comes up).
2) Use a regular web server like CherryPy (which I am more confident with) and have the web app use a messaging channel to report processing progress. The web server returns an HTTP response as soon as it receives the file, validates it, and hands it to a background process; at the same time, it sends a message announcing that processing has started. The background process then takes care of delivering the file to an available location and sends another message to the channel giving the location of the new file. This solution looks more flexible than 1), but requires writing a separate script to handle the messages outside the web application, as well as separate storage space for the temp files, which have to be cleaned up at some point.
3) Use some internal messaging capability of any of the web servers mentioned above, which I am not familiar with...
Edit: something like CherryPy's publish-subscribe engine (http://cherrypy.readthedocs.org/en/latest/extend.html?highlight=messaging#publish-subscribe-pattern) could be a good solution.
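For reference, that bus is used roughly like this (the channel name is made up):
import cherrypy

def on_file_ready(path):
    # subscriber: runs whenever something is published on "file-ready"
    cherrypy.log("processed file available at %s" % path)

cherrypy.engine.subscribe("file-ready", on_file_ready)

# ... in the background worker, once processing finishes:
cherrypy.engine.publish("file-ready", "/tmp/output/result-123.mp4")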
Any suggestions?
Thank you,
gm
I had a similar situation come up with a really large-scale data processing engine that my team implemented. We wanted to build our API calls in Flask, some of which can take many hours to complete, but have a way to notify the user in real time about what is going on.
Basically, what I came up with is what you described as option 2. On the same machine that serves the Flask app through Apache, I created a Tornado app that serves a websocket reporting progress to the end user. Once my main page is served, it establishes the websocket connection to the Tornado server; the Flask app periodically sends updates to the Tornado app, which pushes them down to the end user. Even if the browser is closed during the long-running operation, Apache keeps the request alive and processing, and upon logging back in, I can still see the current progress.
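A minimal sketch of the Tornado side of that setup: the Flask app POSTs progress updates to a plain HTTP endpoint on the Tornado process, which fans them out to the connected browser websockets (the endpoint names and payload shape are made up):
import json
import tornado.ioloop
import tornado.web
import tornado.websocket

sockets = set()

class ProgressSocket(tornado.websocket.WebSocketHandler):
    def open(self):
        sockets.add(self)                   # browser connected

    def on_close(self):
        sockets.discard(self)

class Notify(tornado.web.RequestHandler):
    def post(self):
        # The Flask side would do something like:
        #   requests.post("http://localhost:8888/notify",
        #                 json={"job": job_id, "pct": 42})
        update = json.loads(self.request.body)
        for ws in sockets:
            ws.write_message(update)        # push down to each browser

app = tornado.web.Application([
    (r"/progress", ProgressSocket),
    (r"/notify", Notify),
])

if __name__ == "__main__":
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()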
I wrote about this solution in some more detail here:
http://jonfeatherstone.com/2013/08/01/mongo-and-websockets-for-application-logging/
Good luck!
