Django - Consuming a RESTful service asynchronously - python

I need to create a Django web portal in which users can select and run ad-hoc reports by providing values, via forms, for parameters defined in each specific report. The view that processes the user’s report execution requests needs to make RESTful service calls to a remote Jasper Reports Server where the actual output is generated.
I have already written the client to make the RESTful service calls to the remote server. Depending on how large the report is the service calls can take several minutes.
What is the best method for making the service call after the user’s form has been validated, so that the call is processed asynchronously (in the background) and the user can continue to use the web portal while their report is being generated?
Do I need to make an AJAX call when the parameters form is submitted or should I start a new thread for the RESTful client in the view after the form has validated? Or something else?

django-celery is a popular choice for async tasks; I usually use greenlets as I'm used to them.
Then, to notify the user, you can use the notification framework to tell the client that something is done.
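A minimal sketch of the Celery route, assuming Celery is already wired into the Django project; generate_report and run_report are illustrative names, with run_report standing in for the RESTful Jasper client you have already written:

    # tasks.py -- sketch only; run_report() is a placeholder for your
    # existing RESTful Jasper Reports client.
    from celery import shared_task

    def run_report(report_id, params):
        # Placeholder for the client you have already written.
        raise NotImplementedError

    @shared_task
    def generate_report(report_id, params, user_id):
        """Run the slow Jasper call in a worker instead of the request cycle."""
        output = run_report(report_id, params)
        # Persist the output (or a link to it) somewhere the portal can show
        # the user later, e.g. a ReportRun row keyed by user_id.
        return output

In the view, once form.is_valid() passes, you would call generate_report.delay(report_id, form.cleaned_data, request.user.id) and return immediately.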

Related

How to handle high response time

There are two different services. One service (Django) receives the request from the front end and then calls an API in the other service (Flask).
But the response time of the Flask service is high, and if the user navigates to another page, that request will be canceled.
Should it be a background task or a pub/sub pattern? If so, how do I run it in the background and then tell the user "here is your last result"?
You have two main options:
Make an initial request to a "simple" Django view, which loads a skeleton HTML page with a spinner; some JS then triggers an XHR request to a second Django view that contains the call to the other service (Flask). This way you can even properly alert your user that loading takes time, and handle exits on the browser side (ask for confirmation before leaving, abort the request, ...).
If possible, cache the result of the Flask service, so you don't need to call it at each page load.
You can combine those two solutions by calling the service in an asynchronous request and caching its result (depending on the context, you may need to vary the cache per connected user, for example).
The first solution can also be built with pub/sub, WebSockets, or whatever you like, but a classic XHR seems fine for your case.
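A rough sketch of those two ideas combined, assuming the Flask service is reachable at a placeholder URL and returns a JSON object; the view names, cache key, and timeouts are illustrative, not part of the original answer:

    # views.py -- illustrative sketch only.
    import requests
    from django.core.cache import cache
    from django.http import JsonResponse
    from django.shortcuts import render

    FLASK_URL = "http://flask-service.internal/api/report"  # placeholder

    def report_page(request):
        # "Simple" view: returns the skeleton page with a spinner; the page's
        # JS then fires an XHR to report_data below.
        return render(request, "report_skeleton.html")

    def report_data(request):
        # Second view: does the slow Flask call, behind a per-user cache.
        cache_key = f"flask-report-{request.user.id}"
        data = cache.get(cache_key)
        if data is None:
            data = requests.get(FLASK_URL, timeout=300).json()  # assumed dict
            cache.set(cache_key, data, timeout=600)  # keep for 10 minutes
        return JsonResponse(data)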
On our project, we have a couple of time-expensive endpoints. Our solution was similar to a previous answer:
Once we receive a request, we call a Celery task that does its expensive work asynchronously. We do not wait for its results and return a quick response to the user. The Celery task sends its progress/results to the user via WebSockets, and the frontend handles this WS message. The benefit of this approach is that we do not spend the CPU of our backend; we spend the CPU of the Celery worker, which runs on another machine.
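A sketch of what the task side of that flow could look like, assuming Django Channels is set up with a channel layer and a consumer subscribed to a per-user group; the group name, message type, and do_the_slow_thing() are placeholders:

    # tasks.py -- illustrative sketch only.
    from asgiref.sync import async_to_sync
    from celery import shared_task
    from channels.layers import get_channel_layer

    def do_the_slow_thing(payload):
        # Placeholder for the actual time-expensive work.
        return {"status": "done"}

    @shared_task
    def expensive_work(user_id, payload):
        result = do_the_slow_thing(payload)
        channel_layer = get_channel_layer()
        # Delivered to the consumer's report_ready() handler, which forwards
        # it to the browser over the open WebSocket.
        async_to_sync(channel_layer.group_send)(
            f"user-{user_id}",
            {"type": "report.ready", "result": result},
        )

The view just calls expensive_work.delay(request.user.id, payload) and returns a quick response right away.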

Slack: Is it possible to initiate an API call using user input from Slack to another system?

I have been asked to try and get something working in our slack environment for our campus locations to use. The goal is to have a user input the location which would initiate the API call to the other system and return some basic high level system health stats.
I am familiar with how to set up webhooks to Slack; I'm just not sure whether this is possible with Slack or not.
The Slack API (https://api.slack.com/) is fairly complex to get set up, but it will do what you want once you get there. It has a web API that you can register with to receive hooks when things appear in messages or chats, so you can trigger things to run when people say certain things.
If I understand correctly, you want a user to input a location on slack, and based on their input to make an API call to a different service.
You have several options to get the input from the user:
You can create a bot that the user can chat with
You can create a shortcut or workflow that users can use to fill some kind of form
You can allow for interactions on your application's home page
All these options will get Slack to send a payload to some endpoint you define. You will have to set up some basic back end to handle this and call your external APIs from it.
I'm currently working on a similar project and recommend using a fast, serverless setup. I have opted for Lambda and API Gateway for this; a sketch of the Lambda side follows the steps. The experience is:
The user goes to the app home page and presses a button
The user gets a form to fill
On form submission, Slack sends a payload to an endpoint set up via API Gateway
API Gateway invokes a Lambda function
The function parses and validates the payload, and ultimately makes a request to my external API
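A rough sketch of that Lambda function, assuming an API Gateway proxy integration without base64-encoded bodies and a view-submission (modal form) payload; the block/action IDs and the health-check URL are placeholders for whatever your form and external system actually use:

    # lambda_function.py -- illustrative sketch only.
    import json
    import urllib.parse
    import urllib.request

    HEALTH_API = "https://monitoring.example.com/api/health"  # placeholder

    def lambda_handler(event, context):
        # Slack sends interactive payloads form-encoded as payload=<json>.
        body = urllib.parse.parse_qs(event["body"])
        payload = json.loads(body["payload"][0])

        if payload.get("type") == "view_submission":
            values = payload["view"]["state"]["values"]
            location = values["location_block"]["location_input"]["value"]

            url = f"{HEALTH_API}?location={urllib.parse.quote(location)}"
            with urllib.request.urlopen(url, timeout=5) as resp:
                stats = json.load(resp)
            # Post `stats` back to the user, e.g. via chat.postMessage or the
            # payload's response_url (omitted here).

        # An empty 200 acknowledges the submission and closes the modal.
        return {"statusCode": 200, "body": ""}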

Slack API - queue for retrieving slash commands

I am tasked with building a Slack slash command app in Python which will respond to incoming slash commands. However, for security reasons, I am not allowed to open the firewall for incoming webhooks from Slack. Is there instead a way to check a queue of sent slash commands?
For example, a user types "/myslashapp" in a specific channel. My app will need to do something like call an endpoint every 30 seconds and check if the "/myslashapp" command was sent. If it was, my app should trigger a Lambda function in AWS.
Based on reading the Slack API docs, I haven't found any way to do this other than perhaps the RTM API, though it seems like overkill and still requires an open socket.
No. The Slack API has no built-in support that allows you to pull requests after the fact from a queue instead of receiving them from Slack when they happen.
The RTM API might work for you, because the connection to Slack is initiated from your side. So - provided your firewall allows it - it would also work from within an intranet. However, you cannot do slash commands with the RTM API, or use any of the other interesting interactive Slack features like buttons; only simple messages and events.
You could implement your own bridging solution and pull from it. But I don't think a polling solution would work, because it creates a lot of latency for your app. Users expect an immediate response to their slash command, not a delay of 30 seconds or more.
So in summary I think you only have two valid options:
Host your app internally and use a secure tunnel like ngrok to expose a public URL to your app.
Run your app on the Internet and give it a secure connection to your intranet for accessing internal data (similar to how a shopping website works: it has a public app on the Internet, but can also transmit orders to the business applications on the company's intranet).

How to know when someone returns a HIT?

For an ExternalQuestion, when a worker views a HIT in preview mode, the URL that is sent is something like:
/mturk?assignmentId=ASSIGNMENT_ID_NOT_AVAILABLE&hitId=3FSEU3P2NR0J4ISYGCVR597YQFLRRR
And then when the user Accepts the HIT, it updates the assignmentId and adds a workerId:
/mturk/?assignmentId=384PI804XS1ASN65RQHJZ77QLSES0H&hitId=3B9XR6P1WEVFQNSWCA0S33G3YCPBJ7&workerId=A1D23ERS0X4J9D&turkSubmitTo=https%3A%2F%2Fworkersandbox.mturk.com
Is there a way to know if a HIT is returned and not finished? I tried emulating this behavior as a worker, and no request was sent to my URL. How would I tell, then?
This was recently asked on the AWS Developer Forum. I'll copy a modified version of my answer from there:
You can use the Notifications API to trigger a notification every time a worker accepts an assignment. You could then catalog these notifications and compare them to the set of actual responses.
If you are hosting your HIT on your own server, you could configure your server to log every view of the HIT (every view would log the workerId of the worker viewing it, but with an ASSIGNMENT_ID_NOT_AVAILABLE value for the assignmentId), while accepted assignments that are later returned would register an assignmentId that was never submitted to MTurk. For HITs hosted by AWS (e.g., those created via the requester user interface, or set up as QuestionForm or HTMLQuestion HITs via the API), this option is not available to you.
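For the first suggestion, a sketch of wiring up notifications with boto3 is below; the HIT type ID and SNS topic ARN are placeholders, and the AssignmentReturned/AssignmentSubmitted event types are included only to show what else the Notifications API can report:

    # Sketch only: register MTurk notifications for a HIT type via boto3.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    mturk.update_notification_settings(
        HITTypeId="YOUR_HIT_TYPE_ID",  # placeholder
        Notification={
            # Placeholder SNS topic that your own service subscribes to.
            "Destination": "arn:aws:sns:us-east-1:123456789012:mturk-events",
            "Transport": "SNS",
            "Version": "2006-05-05",
            "EventTypes": [
                "AssignmentAccepted",
                "AssignmentReturned",
                "AssignmentSubmitted",
            ],
        },
        Active=True,
    )

Each accepted assignment then shows up on the topic, and any accept without a matching submit can be treated as returned or abandoned.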

RESTful API across multiple users

I am somewhat new to RESTful APIs.
I'm trying to implement a python system that will control various tasks across multiple computers, with one computer acting as the controller.
I would like all these tasks to be divided amongst multiple users (e.g., task foo runs as user foo, and task bar runs as user bar) while handling all requests with a central system. The central system should also act as a simple web server and be able to serve basic pages for status purposes.
Is it possible to have each user register a "page" with a central server for the API and have the server pass all requests on to the programs (probably written in Python)?
Sure, you just need the clients to POST their notification URL to the server, so that the server can then POST the requests back to them. These are called Webhooks by some people.
Also see RESTful Webhooks.
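A minimal sketch of that registration-and-forwarding idea using Flask and requests on the central server; the routes, payload fields, and in-memory registry are illustrative only:

    # controller.py -- illustrative sketch only.
    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    registry = {}  # task name -> callback URL registered by a worker process

    @app.route("/register", methods=["POST"])
    def register():
        # A worker process (e.g. the one running as user "foo") registers the
        # URL it wants task requests forwarded to.
        data = request.get_json()
        registry[data["task"]] = data["callback_url"]
        return jsonify(status="registered")

    @app.route("/run/<task>", methods=["POST"])
    def run(task):
        # The controller passes the request through to whichever program
        # registered for this task, and relays the response.
        callback = registry.get(task)
        if callback is None:
            return jsonify(error="unknown task"), 404
        resp = requests.post(callback, json=request.get_json(), timeout=30)
        return jsonify(resp.json()), resp.status_code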
Yes. Keep in mind that being RESTful is merely a way to organize your web application's URLs in a standard way. You can build your web application to do whatever you want.
