I have an input form with a couple of input fields, mostly numerical. I have a Python script which takes the input values, performs some moderately heavy calculations (arrays, loops, etc.), and returns the results.
I would like to display the results below the input form without a page refresh. The simplest example would be to display the average of the three input fields when the user clicks a Calculate button, or instantly without clicking at all.
Could anybody point me in the right direction on how this would be implemented? I don't know what the best framework for this is. Just Ajax, or Angular? Should the calculations run on the client side or the server?
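For illustration, here is roughly what a small server-side JSON endpoint for this could look like, assuming Flask on the server; the /calculate route and the field names are made up. The page would POST the three values with fetch()/Ajax and write the returned average below the form without a reload:

```python
# Hypothetical Flask endpoint: the page POSTs the three field values as JSON
# and gets the computed average back, so only the result area is updated.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/calculate", methods=["POST"])
def calculate():
    data = request.get_json()                        # e.g. {"a": "1", "b": "2", "c": "3"}
    values = [float(data[k]) for k in ("a", "b", "c")]
    return jsonify(average=sum(values) / len(values))
```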
Related
I want to update my plotly.js graph every time the user clicks and selects a specific value from a dropdown menu.
I just want to know the best practice for storing the data so I can query it based on the user's selection.
Is it better to call the API every time a selection is made, or to fetch all the data in JavaScript in one call when the page loads and reuse it by filtering?
I'm just wondering whether loading all the raw data in one call might be too much for the user's browser.
Please suggest the best practice for updating graphs.
PS: I'm using Flask as the framework and generating the graphs with plotly.js.
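For what it's worth, a minimal per-selection endpoint in Flask could look roughly like this; query_rows_for() and the column names are placeholders for however the data is actually stored. The page would call it whenever the dropdown changes and hand the JSON to Plotly:

```python
# Sketch of a per-selection data endpoint; query_rows_for() is a placeholder
# for the real database query.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/data")
def data():
    category = request.args.get("category")         # value chosen in the dropdown
    rows = query_rows_for(category)                  # placeholder DB/query call
    return jsonify(
        x=[row.timestamp.isoformat() for row in rows],
        y=[row.value for row in rows],
    )
```

Whether to call this per selection or load everything up front mostly comes down to total payload size: per-selection calls keep the browser's memory small at the cost of an extra round trip on each change.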
I have a pair of time field inputs in my Django forms. I want to create the form in a way that allows the user to enter the time in whichever format they like, and have the program reformat it to the correct type.
I'm not sure what the best way to go is.
Django validation, but then the user doesn't get feedback about their input.
The HTML5 time input sounds promising, but it has poor support in IE, which users are expected to use.
JavaScript/jQuery, but the user can modify the JavaScript to bypass client-side validation.
Any suggestions?
You should check out Arrow.
Arrow is a Python library that offers a sensible, human-friendly approach to creating, manipulating, formatting and converting dates, times, and timestamps.
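For example, a rough sketch of normalising a user-entered time inside a form's clean method, assuming Arrow's multi-format parsing; the form, field name, and list of accepted formats are just examples:

```python
# Rough sketch: accept a few time formats and normalise to HH:mm.
# Extend the format list to whatever you want to allow.
import arrow
from django import forms

class TimeRangeForm(forms.Form):
    start = forms.CharField()

    def clean_start(self):
        raw = self.cleaned_data["start"]
        try:
            parsed = arrow.get(raw, ["HH:mm", "H:mm", "h:mm A"])
        except arrow.parser.ParserError:
            raise forms.ValidationError("Enter a time like 14:30 or 2:30 PM.")
        return parsed.format("HH:mm")
```

This also gives the user feedback, since the ValidationError is rendered next to the field like any other Django form error.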
I'm working on a Django application that consists of a scraper that scrapes thousands of store items (price, description, seller info) per day and a django-template frontend that allows the user to access the data and view various statistics.
For example: the user is able to click on 'Item A', and gets a detail view that lists various statistics about 'Item A' (Like linegraphs about price over time, a price distribution, etc)
The user is also able to click on reports of the individual 'scrapes' and get details about the number of items scraped, the average price, etc.
All of these statistics are currently calculated in the view itself.
This all works well locally, on a small development database with around 100 items. However, in production this database will eventually contain 1,000,000+ rows, which leads me to wonder whether calculating the statistics in the view won't lead to massive lag in the future (especially as I plan to extend the statistics with more complicated regression analysis, and perhaps some nearest-neighbour ML classification).
The advantage of the view-based approach is that the graphs are always up to date. I could of course also schedule a cron job to run the calculations every few hours (perhaps even on a different server). This would make accessing the information very fast, but would also mean that the information could be a few hours old.
I've never really worked with data at this scale before, and was wondering what the best practices are.
As with anything performance-related, do some testing and profile your application. Don't get lured into the premature optimization trap.
That said, given the fact that these statistics don't change, you could perform them asynchronously each time you do a scrape. Like the scrape process itself, this calculation process should be done asynchronously, completely separate from your Django application. When the scrape happens it would write to the database directly and set some kind of status field to processing. Then kick off the calculation process which, when completed, will fill in the stats fields and set the status to complete. This way you can show your users how far along the processing chain they are.
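As a very rough Django-side sketch of that status-field idea (the model and field names are invented, and Celery is shown as one possible way to run the calculation outside the request cycle):

```python
# models.py - a scrape run that carries its own status and pre-computed stats
from django.db import models

class ScrapeRun(models.Model):
    PROCESSING, COMPLETE = "processing", "complete"
    status = models.CharField(max_length=20, default=PROCESSING)
    item_count = models.IntegerField(null=True)
    average_price = models.FloatField(null=True)

# tasks.py - kicked off right after the scraper finishes writing its items
from celery import shared_task

@shared_task
def compute_stats(run_id):
    run = ScrapeRun.objects.get(pk=run_id)
    items = run.item_set.all()                       # assumes Item has a FK to ScrapeRun
    run.item_count = items.count()
    run.average_price = items.aggregate(models.Avg("price"))["price__avg"]
    run.status = ScrapeRun.COMPLETE
    run.save()
```

The views then just read the pre-computed fields, and can show a "still processing" notice while the status is not yet complete.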
People love feedback over immediate results, and they'll tolerate considerable delays if they know they'll eventually get a result. Strand a user and they'll get frustrated more quickly than any computer can finish processing; lead them on a journey and they'll wait for ages to hear how the story ends.
I'm working on a web app in Python (Flask) that, essentially, shows the user information from a PostgreSQL database (via Flask-SQLAlchemy) in a random order, with each set of information being shown on one page. Hitting a Next button will direct the user to the next set of data by replacing all data on the page with new data, and so on.
My conundrum is making the presentation truly random: remembering which sets of data the user has already seen, and not showing them again.
The site has no user system, and the "already seen" sets of data should be forgotten when they close the tab/window or navigate away.
I should also add that I'm a total newbie to SQL in general.
What is the best way to do this?
The easiest way is to do the random number generation in JavaScript on the client end...
Tell the client what the highest-numbered row is; the client page then keeps track of which ids it has requested (just a simple JS array). When the "request next random page" button is clicked, it generates a new random number less than the highest valid row id, and provided that number isn't in its list of previously viewed items, it sends a request for that item.
This way, you (on the server) only have to have 2 database accessing views:
main page (which gives the js, and the highest valid row id)
display an item (by id)
You don't have any complex session tracking, and the user's browser only has to keep track of a simple list of numbers, which, even if they personally view several thousand different items, is still only going to be a meg or two of memory.
For performance reasons, you can even pre-fetch the next item as soon as the current item loads, so that it displays instantly and loads the next one in the background while they're looking at it. (jQuery .load() is your friend :-) )
If you expect a large number of items to be removed from the database (so that the highest number is not helpful), then you can instead generate a list of random ids, send that, and then request them one at a time. Pre-generate the random list, as it were.
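A minimal Flask sketch of the two server-side views described above; the Item model, template name, and database URL are placeholders:

```python
# Two views only: the main page hands the JS the highest valid id, and
# /item/<id> returns a single row for the client-picked random id.
from flask import Flask, render_template, jsonify
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy import func

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "postgresql:///mydb"   # placeholder
db = SQLAlchemy(app)

class Item(db.Model):                                # placeholder model
    id = db.Column(db.Integer, primary_key=True)
    body = db.Column(db.Text)

@app.route("/")
def main_page():
    max_id = db.session.query(func.max(Item.id)).scalar()
    return render_template("main.html", max_id=max_id)

@app.route("/item/<int:item_id>")
def show_item(item_id):
    item = Item.query.get_or_404(item_id)            # ids may have gaps; client retries on 404
    return jsonify(id=item.id, body=item.body)
```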
Hope this helps! :-)
You could stick the "already seen" data in a session cookie. Selecting random SQL data is explained here.
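A sketch of that approach, reusing the placeholder Item model and app from the Flask example above (Flask's session needs app.secret_key set, and the session cookie only holds about 4 KB, so this suits modest numbers of seen ids):

```python
# Keep the seen ids in the session cookie and exclude them when picking
# a random row server-side.
from flask import session, jsonify
from sqlalchemy import func

@app.route("/next")
def next_item():
    seen = session.get("seen", [])
    item = (Item.query
            .filter(~Item.id.in_(seen))
            .order_by(func.random())                 # random() works on PostgreSQL and SQLite
            .first())
    if item is None:
        return jsonify(done=True)                    # everything has been shown
    seen.append(item.id)
    session["seen"] = seen                           # reassign so the updated cookie is sent
    return jsonify(id=item.id, body=item.body)
```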
I have a user input form here:
http://www.7bks.com/create (Google login required)
When you first create a list you are asked to create a public username. Unfortunately, there is currently no constraint to make this unique. I'm working on the code to enforce unique usernames at the moment and would like to know the best way to do it.
Tech details: appengine, python, webapp framework
What I'm planning is something like this:
First, the /create form posts the data to /inputlist/ (this is the same as what currently happens).
/inputlist/ queries the datastore for the given username. If it already exists, redirect back to /create.
Display the /create page with all the previously entered info, plus an additional error message: "this username is already taken".
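For illustration, the /inputlist/ check could look roughly like this in the webapp framework (the List model and property names are invented):

```python
# Rough sketch of the uniqueness check in the /inputlist/ POST handler.
from google.appengine.ext import db, webapp

class List(db.Model):                                # invented model
    username = db.StringProperty()
    title = db.StringProperty()

class InputList(webapp.RequestHandler):
    def post(self):
        username = self.request.get('username')
        taken = List.all().filter('username =', username).get() is not None
        if taken:
            # back to /create with an error flag; the rest of the list data
            # still has to be carried along somehow (hence the options below)
            self.redirect('/create?error=username_taken')
            return
        new_list = List(username=username, title=self.request.get('title'))
        new_list.put()
        self.redirect('/')
```

Note that a check-then-put like this isn't transactional, so two users could still race; one way to make the username genuinely unique would be to use it as the entity's key name, since datastore key names are unique.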
My question is:
Is this the best way of handling server side validation?
What's the best way of storing the list details while I verify and modify the username?
As I see it I have 3 options to store the list details but I'm not sure which is "best":
Store the list details in the session cookie (I am using GAEsessions for cookies)
Define a separate POST class for /create and post the list data back from /inputlist/ to the /create page (currently /create only has a GET class)
Store the list in the datastore, even though the username is non-unique.
Thank you very much for your help :)
I'm pretty new to Python and coding in general, so my apologies if I've missed something obvious.
Tom
PS - I'm sure I can eventually figure it out, but I can't find any documentation on POSTing data using the App Engine webapp framework, which I'd need in order to do solution 2 above. Maybe you could point me in the right direction for that too? Thanks!
PPS - It's a little out of date now but you can see roughly how the /create and /inputlist/ code works at the moment here: 7bks.com Gist
I would use Ajax to do an initial validation. For example, as soon as the username input box loses focus, I would send a request to the server in the background asking whether the username is free, and clearly signal the result to the user.
Form validation done through Ajax is a real user-experience delight if done correctly.
Of course, before any of the data is saved, I would definitely redo the validation server-side to avoid request spoofing.
jQuery has a nice form validation plugin if you are interested: http://docs.jquery.com/Plugins/validation.
In my career, I've never gotten around having to validate server-side as well as client-side, though.
About storing the list before you persist it to the datastore: if you use Ajax to validate the username, you could keep the other fields disabled until a valid username is filled in. Don't allow the form to be posted with an invalid username!
That would perhaps solve your problem for most cases. There is the remote possibility that someone else steals the username while your first user is still filling in their list of books. If you want to solve that problem, I suggest simply re-displaying the list as you got it from the user's request. They just sent it to you; you don't have to save it anywhere else.
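The server side of that Ajax check can be a tiny handler, for example the following (reusing the invented List model from the sketch in the question); the page calls it when the username field loses focus and shows the plain-text answer next to the field:

```python
# Returns "taken" or "free" for the username passed as a query parameter.
from google.appengine.ext import db, webapp

class CheckUsername(webapp.RequestHandler):
    def get(self):
        username = self.request.get('username')
        taken = List.all().filter('username =', username).get() is not None
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('taken' if taken else 'free')
```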
Can you use the Django form validation functionality (which I think should just abstract all this away for you)?
http://code.google.com/appengine/articles/djangoforms.html
Search that page for "adding an item" - it handles errors automatically (which I think could include a non-unique username).
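If you go that way, here is a hedged sketch of adding the uniqueness rule yourself via a clean_username hook; djangoforms builds on Django's forms, so field-level clean methods should work, and the names are again invented:

```python
# Sketch only: a djangoforms ModelForm that rejects usernames already
# present in the datastore.
from django import forms
from google.appengine.ext.db import djangoforms

class ListForm(djangoforms.ModelForm):
    class Meta:
        model = List                                 # invented model from above

    def clean_username(self):
        username = self.cleaned_data['username']
        if List.all().filter('username =', username).get():
            raise forms.ValidationError('This username is already taken.')
        return username
```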
Warning: also a beginner... :)