Real-time data on webpage with jQuery - python

I would like a webpage that constantly updates a graph with new data as it arrives. Normally, all the data you have is passed to the page at the time of the request. However, I need the page to be able to update itself with fresh information every few seconds to redraw the graph.
Background
The webpage will be similar to this http://www.panic.com/blog/2010/03/the-panic-status-board/. The data coming in will be temperature values to be graphed, measured by an Arduino and saved to the Django database (this part is already complete).
Update
It sounds as though the solution is to use the jQuery.ajax() function (http://api.jquery.com/jQuery.ajax/) with a complete callback that schedules another request several seconds later to a URL that returns the data in JSON format.
How can that follow-up request be scheduled? With the .delay() function?

So the page must perform periodic jQuery.ajax calls with the url parameter set to a server URL that serves the latest up-to-date information, ideally in JSON form (possibly just as an incremental delta since the last instant for which the client has data -- the client can send that instant as a query parameter in the Ajax call). The callback at the completion of the async request can schedule the next Ajax call for a few seconds in the future and then redraw the graph.
The fact that you're using Django server-side doesn't seem all that crucial -- the server just needs to format a bunch of data as JSON and send it back, after all.
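As a rough illustration of such an endpoint, a Django view along these lines could serve the incremental JSON; the Reading model, its fields, and the "since" parameter are assumptions made up for the sketch, not part of the original question.

# views.py -- minimal sketch, assuming a hypothetical Reading model
# with `timestamp` and `value` fields (names are illustrative only)
from django.http import JsonResponse
from .models import Reading

def latest_readings(request):
    since = request.GET.get("since")  # the last instant the client already has
    readings = Reading.objects.order_by("timestamp")
    if since:
        readings = readings.filter(timestamp__gt=since)
    data = [{"timestamp": r.timestamp.isoformat(), "value": r.value} for r in readings]
    return JsonResponse({"readings": data})

The jQuery side would then poll this URL every few seconds, append whatever new points come back, and redraw the graph.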

If the data consists of graphics generated on the fly, you can serve them (as GIF, PNG or any other image format) through Django and reload each image individually, or even reload the whole page at once.
It depends on your load and performance requirements, but you can start by simply reloading the whole page and then, if necessary, use AJAX to reload just the specific parts that change (this is easy to achieve with jQuery or Prototype using periodic updates). If you expect a lot of data, you should switch to generating the graph on the client from incremental JSON data.
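For the server-rendered-image variant, a hedged sketch of a Django view that draws the graph with matplotlib and returns it as a PNG (again assuming a hypothetical Reading model; the page would simply reload the img element on a timer):

# views.py -- sketch only; model and field names are placeholders
import io
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
from django.http import HttpResponse
from .models import Reading

def temperature_plot(request):
    readings = Reading.objects.order_by("timestamp")
    fig, ax = plt.subplots()
    ax.plot([r.timestamp for r in readings], [r.value for r in readings])
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    return HttpResponse(buf.getvalue(), content_type="image/png")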

Related

Cloud Function http - send back results live as they arrive

I have a Cloud Function (Python) that does a long (but not heavy) calculation that depends on other external APIs, so the response might take some time (about 30 seconds).
def test(request):
    request_json = request.get_json()
    for x in y:                       # y: the collection of items to process
        r = get_external_api_respond()
        # calculate with r and return a partial response
The problems and questions are:
Is there a way to start returning results to the web client as they arrive in the function? Right now I understand that an HTTP function can only return once and then the connection is closed.
Pagination in this case would be too complicated to achieve, as results depend on previous results, etc. Are there any solutions in Google Cloud to return live results as they come in? Another type of function?
Will it be very expensive if the function stays open for a minute even though it does not do heavy calculations, just multiple API requests in a loop?
You need to use some intermediary storage, which you top up from your function and read from your web page's HTTP requests. I wouldn't really call it a producer-consumer pattern, as you produce once but consume as many times as you need to.
You can use Table Storage or Blob Storage if you use Azure.
https://learn.microsoft.com/en-us/azure/storage/tables/table-storage-overview
https://azure.microsoft.com/en-gb/products/storage/blobs/
With Table, you can just add records as they are calculated.
With Blob, you can use the Append blob type, or just read and rewrite the blob each time (it seems like you have a single producer).
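A rough sketch of the append-blob variant with the azure-storage-blob v12 SDK; the connection string, container/blob names and the items loop are placeholders, and get_external_api_respond refers to the asker's pseudocode:

# producer side: append each partial result as soon as it is calculated
import json
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="results", blob="job-123.jsonl")
blob.create_append_blob()

for x in items:
    r = get_external_api_respond()            # your existing external call
    blob.append_block((json.dumps(r) + "\n").encode())

# consumer side: the web page polls and renders whatever has arrived so far
text = blob.download_blob().readall().decode()
partial_results = [json.loads(line) for line in text.splitlines()]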
As a bonus, you can distribute your task across multiple functions and get results much faster. This is called scale-out.

Best approach to an API update project

I'm working on a personal project to segment some data from the Sendinblue API (CRM service). Basically, what I'm trying to achieve is to generate a new score attribute for each user based on their emailing behavior. For that purpose, the process I've planned is as follows:
Get data from the API
Store in database
Analyze and segment the data with Python
Create and update the score attribute in Sendinblue every 24 hours
The API has a rate limit of 400 requests per minute, and we are talking about 100k records right now, which means I have to spend about 3 hours to get all the initial data (currently I'm using concurrent.futures for multiprocessing). After that I plan to store and update only the records that present changes. I'm wondering if this is the best way to do it and which combination of tools is better for this job.
Right now I have all my scripts in Jupyter notebooks, and I recently finished my first Django project, so I don't know if I need a Django app for this one or whether I can just connect the notebook to a database (PostgreSQL?), and if the latter is possible, which library I have to learn to run my script every 24 hours. (I'm a beginner.) Thanks!
I don't think you need Django unless you want a web app to view your data. Even then, you could write that web application with any framework/language. So I think a simpler approach is:
Create your Python project; the entry-point main function will execute the logic to fetch data from the API. Once that's done, it can run the logic to analyze the data and save the results in the database (see the sketch after these steps).
If you can query your final results with SQL, you don't need to build a web application. Otherwise you might want to build a small web application that pulls data from the database to show statistics in charts or export them in any preferred format.
Set up a Linux cron job to execute the Python code from step 1 and let it run every 24 hours at a particular time you want. Link: https://phoenixnap.com/kb/set-up-cron-job-linux
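A minimal sketch of the entry point from step 1, with the cron line from step 3 as a comment; every function name here is a placeholder:

# run_sync.py -- placeholder names throughout
def main():
    contacts = fetch_contacts_from_sendinblue()   # pull data from the API
    scores = compute_scores(contacts)             # analyze / segment
    save_scores_to_postgres(scores)               # store results
    push_scores_to_sendinblue(scores)             # update the score attribute

if __name__ == "__main__":
    main()

# crontab entry to run it every day at 02:00:
# 0 2 * * * /usr/bin/python3 /path/to/run_sync.py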

Handle load time of big JSON for autocomplete on the server side

Hello, I have an app that shows autocomplete suggestions to the user based on input in the search bar.
I use a package named fast_autocomplete, which works great, but I have the problem that each time I want to use the prediction I have to load 50MB of JSON data that I made for the autocomplete (500K records).
While running it on the server side, loading, parsing and sending back the data is quite slow for what you would expect from autocomplete functionality. It should take less than a second, and right now (testing locally) it takes a few seconds.
Checking the issue, it seems that it takes a lot of time to load the 50MB JSON for each request. Loading the data and building the autocomplete object each time a new request comes in is a waste of time.
So I wondered whether there is a way to keep the object alive and loaded all the time, so that when a new HTTP request comes in the JSON is already loaded.
How do big sites like Amazon, eBay and Google make their autocomplete so fast?
If I understand correctly, you're loading your data for every request even though that data stays the same for every user. Why not cache that data, or store it outside of your request handler's scope?
# load the data once, at module import time, so every request reuses it
auto_complete_data = open("...").read()

@route('/autocomplete')
def autocomplete():
    # keep reusing the already loaded `auto_complete_data`
    ...
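For example, with Flask the JSON could be loaded and the fast_autocomplete object built once at import time and then reused by every request. The file path and route are made up, and the AutoComplete/search arguments follow the package's documented interface but should be checked against your data:

import json
from flask import Flask, request, jsonify
from fast_autocomplete import AutoComplete

app = Flask(__name__)

# done once, when the worker process starts, not on every request
with open("words.json") as f:              # placeholder path
    words = json.load(f)                   # assumed to map word -> context dict
autocomplete = AutoComplete(words=words)

@app.route("/autocomplete")
def suggest():
    term = request.args.get("q", "")
    return jsonify(suggestions=autocomplete.search(word=term, max_cost=3, size=5))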

Aggregate multiple APIs request results using python

I'm working on an application that will have to use multiple external APIs for information and, after processing the data, output the result to a client. The client uses a web interface to query; once the query is sent to the server, the server sends requests to different API providers and, after joining the responses from those APIs, returns the response to the client.
All responses are in JSON.
current approach:
import requests

def get_results(city, country, query, type, position):
    # get the list of APIs with authentication codes for this query
    apis = get_list_of_apis(type, position)
    results = []
    for api in apis:
        result = requests.get(api)
        # parse the JSON
        # combine the result into a uniform format to display
    return results
Server uses Django to generate response.
Problem with this approach
(i) This may generate huge amounts of data even though the client is not interested in all of it.
(ii) Each JSON response has to be parsed according to a different API spec.
How to do this efficiently?
Note: Queries are being done to serve job listings.
Most APIs of this nature allow for some sort of "paging". You should code your requests to only draw a single page from each provider. You can then consolidate the several pages locally into a single stream.
If we assume you have 3 providers, and page size is fixed at 10, you will get 30 responses. Assuming you only show 10 listings to the client, you will have to discard and re-query 20 listings. A better idea might be to locally cache the query results for a short time (say 15 minutes to an hour) so that you don't have to requery the upstream providers each time your user advances a page in the consolidated list.
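A minimal sketch of such a short-lived local cache; the key shape and the 15-minute TTL are illustrative:

import time

CACHE_TTL = 15 * 60            # seconds
_cache = {}                    # query key -> (stored_at, consolidated_results)

def get_cached(key):
    entry = _cache.get(key)
    if entry and time.time() - entry[0] < CACHE_TTL:
        return entry[1]
    return None

def put_cached(key, results):
    _cache[key] = (time.time(), results)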
As for the different parsing required for different providers, you will have to handle that internally. Create a different class for each. The list of providers is fixed and small, so you can code a table of which provider URL gets which class's behavior.
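One way to sketch that table is a dict mapping each provider URL to a small parser class with a common interface; the class names, URLs and JSON fields below are hypothetical:

class BaseParser:
    def parse(self, raw_json):
        """Return listings in the app's uniform format."""
        raise NotImplementedError

class ProviderAParser(BaseParser):
    def parse(self, raw_json):
        return [{"title": j["jobTitle"], "url": j["link"]} for j in raw_json["jobs"]]

class ProviderBParser(BaseParser):
    def parse(self, raw_json):
        return [{"title": j["name"], "url": j["href"]} for j in raw_json["results"]]

# fixed, small table: provider URL -> parser
PARSERS = {
    "https://api.provider-a.example/jobs": ProviderAParser(),
    "https://api.provider-b.example/search": ProviderBParser(),
}

def parse_response(api_url, raw_json):
    return PARSERS[api_url].parse(raw_json)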
Shameless plug, but I wrote a post on how I did exactly this in Django REST framework here.
I highly recommend using Django REST framework, it makes everything so much easier
Basically, the model on your API's end is extremely simple and just contains information on which external API is used and the ID of that API resource. A GenericProvider class then provides an abstract interface to perform CRUD operations on the external source. This GenericProvider uses other providers that you create and determines which provider to use via the provider field on the model. All of the data returned by the GenericProvider is then serialised as usual.
Hope this helps!

How to have Django return a template but keep executing a process after the template is returned?

I'm pretty new to web development, so I'm just trying to see if I have the big picture right for what I am trying to do. Forgive me if any terminology is wrong.
My Django app needs to do the following:
User uploads a file through his browser
File is processed by the server (can take up to an hour)
User sees the results in his browser
I'm having trouble with how to accomplish step 2; here is what I am thinking:
1. User uploads a file (pretty straightforward)
2. File is processed - a view function would go something like this:
def process(request):
    # a. get the file from the request
    # b. return a page which says "the server is running your job,
    #    results will be available in {ETA}"
    # c. start processing the data
3. User sees the results in his browser - the browser queries the server at regular intervals to see if the job is done. When the job is ready, the browser gets the results.
My question is, in step 2, parts b and c: how can I return a response to the browser without waiting for the process to finish? Or, how can I ensure the process keeps running after I return the response to the browser? The process should ideally have access to the Django environment, as it will work with a database through Django's interface.
You need to offload the processing. You could use django-celery.
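A hedged sketch of what that might look like with Celery; the Upload model, template name and run_long_processing function are placeholders, not part of the original question:

# tasks.py
from celery import shared_task
from .models import Upload                     # hypothetical model holding the file

@shared_task
def process_uploaded_file(upload_id):
    # runs in a Celery worker process, with normal access to Django models
    upload = Upload.objects.get(pk=upload_id)
    upload.result = run_long_processing(upload.file.path)   # the hour-long job
    upload.save()

# views.py
from django.shortcuts import render
from .models import Upload
from .tasks import process_uploaded_file

def process(request):
    upload = Upload.objects.create(file=request.FILES["file"])
    process_uploaded_file.delay(upload.pk)     # queue the job and return at once
    return render(request, "processing.html", {"upload": upload})

The browser can then poll a status URL (step 3 in the question) until the worker has saved the result.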
