I want to show processing information, or a log, on the original page while the submitted request is being served, until the execution completes. I think it would be helpful for the user to know what is happening behind the request.
I can't find any clue on how to do this. Can you help me out with how people build something like the site below? For your reference:
http://www.xml-sitemaps.com/
There are two ways I could imagine handling this:
1. Have your backend script (Python) write the progress of the long-running process to a log of some sort (text file, database, session, etc.) and then have JavaScript grab that information via AJAX and update the current page (a sketch of the backend side follows this list).
2. Same deal, but instead of AJAX just put a meta refresh on the page, which would pull in the latest information.
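For the first option, here is a minimal sketch of the backend side, assuming Flask and a hypothetical progress.log file (any Python framework works the same way). The long-running job appends a line per step, and a small endpoint returns the log so the page's JavaScript can poll it via AJAX:

import time
from flask import Flask, jsonify

app = Flask(__name__)
LOG_FILE = "progress.log"  # hypothetical log file the job writes to

def long_job():
    # Stand-in for the real work; each step appends a progress line.
    for step in range(1, 6):
        time.sleep(2)  # simulate a slow step
        with open(LOG_FILE, "a") as f:
            f.write("finished step %d of 5\n" % step)

@app.route("/progress")
def progress():
    # Return whatever the job has logged so far; the page polls this URL.
    try:
        with open(LOG_FILE) as f:
            lines = f.read().splitlines()
    except FileNotFoundError:
        lines = []
    return jsonify(lines)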
You may use Python's threading module, which will run the work in a background thread,
and write your progress messages from that thread.
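Building on the sketch above, the request handler can start the job with threading.Thread so the response returns immediately while the thread keeps appending to the log (note that this is a thread inside the same process, not a separate process):

import threading

@app.route("/start")
def start():
    # Kick off long_job (from the sketch above) in a background thread
    # and return right away; the thread keeps writing to progress.log.
    threading.Thread(target=long_job, daemon=True).start()
    return "started"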
hope it helps ;)
My problem is: I need to generate a CSV file from the data I have in the database and then download it. It seems easy, but I cannot find a solution that lets me do it in the background so the user isn't redirected anywhere.
What I need is the user clicking a button and the download starting immediately, with the user staying on the same page. Something like the plain HTML download attribute, but with the file actually being generated first. I cannot generate the file in advance because the data changes all the time and I need to output the most recent version of it; also, the user may specify some filters for the data, so I need to take those into account.
I use Django with Nginx as a production server. Django should always return a response, so I'm not sure I would be able to do what I want at all. Is there any different way to do it then?
I've searched for a long time, but I've been unable to find anything similar to my request. Thanks everyone!
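For what it's worth, a view that builds the CSV on the fly and returns it with a Content-Disposition: attachment header will start a download without navigating away from the page. A minimal sketch, assuming a hypothetical Item model (swap in your own queryset and filters):

import csv
from django.http import HttpResponse
from myapp.models import Item  # hypothetical app and model

def export_csv(request):
    # Build the CSV at request time so it always reflects current data.
    response = HttpResponse(content_type="text/csv")
    response["Content-Disposition"] = 'attachment; filename="export.csv"'
    writer = csv.writer(response)
    writer.writerow(["id", "name", "created"])
    for item in Item.objects.all():  # apply the user's filters here
        writer.writerow([item.id, item.name, item.created])
    return response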
I created a website that helps people learn how to make certain foods, and I am trying to gather data on which recipes most people want. What I am asking is: is there any way I can record what people search for, and also which page they are on when they close the browser? Should I look at sessions, logs, or something else?
I really need to track the page on which people close the browser.
Make a JavaScript beforeunload callback, and have it send a request to a Django view that records the page they left on; a sketch of the Django side is below.
I googled a question related to that: How to capture the browser window close event?
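On the Django side, the view that records the hit could look something like the sketch below (names are placeholders, not an existing API of your project). The browser side would typically call it from a beforeunload/pagehide handler, for example with navigator.sendBeacon:

import logging
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

logger = logging.getLogger(__name__)

@csrf_exempt  # sendBeacon posts without the usual CSRF token; secure as needed
@require_POST
def track_page_close(request):
    # Record which page the visitor was on when the tab was closed.
    page = request.POST.get("page", "unknown")
    logger.info("browser closed on page: %s", page)
    return HttpResponse(status=204)  # nothing to render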
Here is my problem. I want to use the Flask framework with Python to build a web page with a form. After the user submits it via the browser, they are redirected to another page. Once that page has loaded, a specific task (a piece of code) should be executed, even if the user has already closed the page. After the code has executed, the user gets an email.
Why do I need this? Because the code will be executing for a long time, and I don't want the user to have to wait.
Can you suggest something? Maybe some other way (preferably with Flask)?
You should run a background task after the user submits the form. One possible solution is to use something like Celery.
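A minimal sketch of that idea, assuming a Redis broker on localhost and placeholder task/helper names (not the asker's real setup): the form handler queues the task and returns immediately, and a Celery worker runs it and emails the user afterwards.

import time
from celery import Celery
from flask import Flask, redirect, request

app = Flask(__name__)
celery = Celery(__name__, broker="redis://localhost:6379/0")  # assumed broker

def do_the_long_work():
    time.sleep(60)  # stand-in for the real slow code

def send_email(address, message):
    print("would email %s: %s" % (address, message))  # stand-in for real mail

@celery.task
def long_task(email):
    do_the_long_work()
    send_email(email, "Your task has finished")

@app.route("/submit", methods=["POST"])
def submit():
    # Queue the work and redirect at once; the Celery worker runs it
    # even if the user has already closed the page.
    long_task.delay(request.form["email"])
    return redirect("/thanks")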
I have a webpage showing some data, and a Python script that continuously updates it (fetches the data from the database and writes it to the HTML page). It takes about 5 minutes for the script to fetch the data. I currently have the page refresh every 60 seconds using the meta tag. However, I want to change this so the page refreshes as soon as the Python script updates it; basically, I need to add some code to my Python script that refreshes the HTML page as soon as it's done writing to it.
Is this possible?
Without diving into complex modern things like WebSockets, there's no way for the server to 'push' a notice to a web browser. What you can do, however, is make the client check for updates in a way that is not visible to the user.
It will involve writing some JavaScript and an extra file. When generating your main webpage, embed a timestamp in the JavaScript (a Unix timestamp will be easiest here). Write that same timestamp to a file on the web server (let's call it updatetime.txt). Using an AJAX request on the page, pull in updatetime.txt and check whether the number in the file is bigger than the number stored when the document was generated; refresh the page if you see a newer time. You can control how 'instantly' the changes get noticed by adjusting how quickly you poll.
I won't go into too much detail on writing the code, but I'd probably just use $.ajax() from jQuery (even though it's sort of overkill for one function) to make the calls. The trick to running something on a timer in JS is setInterval. You should be able to find plenty of documentation on both of them already.
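On the Python side of that scheme, the script only needs to drop the current Unix timestamp into updatetime.txt whenever it finishes rewriting the page. A minimal sketch (the function and file names are placeholders; adapt them to your script):

import time

def publish(html):
    # Write the regenerated page, then record the update time so the
    # page's polling JavaScript can compare it with the timestamp baked
    # into the page and reload when the number grows.
    with open("page.html", "w") as f:
        f.write(html)
    with open("updatetime.txt", "w") as f:
        f.write(str(int(time.time())))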
I'm trying to read in info that is constantly changing from a website.
For example, say I wanted to read in the artist name that is playing on an online radio site.
I can grab the current artist's name but when the song changes, the HTML updates itself and I've already opened the file via:
import urllib
f = urllib.urlopen("SITE")
So I can't see the updated artist name for the new song.
Can I keep closing and opening the URL in a while(1) loop to get the updated HTML code or is there a better way to do this? Thanks!
You'll have to periodically re-download the website. Don't do it constantly because that will be too hard on the server.
This is because HTTP, by nature, is not a streaming protocol. Once you connect to the server, it expects you to throw an HTTP request at it, then it will throw an HTTP response back at you containing the page. If your initial request is keep-alive (the default as of HTTP/1.1), you can throw the same request again and get an up-to-date copy of the page.
What would I recommend? Depending on your needs, fetch the page every n seconds and pull out the data you need. If the site provides an API, you can possibly capitalize on that. Also, if it's your own site, you might be able to implement Comet-style AJAX over HTTP and get a true stream.
Also note that if it's someone else's page, the site may use AJAX via JavaScript to keep itself up to date; that means other requests are causing the update, and you may need to dissect the website to figure out which requests you need to make to get the data.
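A minimal polling sketch along those lines, using Python 3's urllib.request (the question's snippet is Python 2 urllib); the URL, interval, and parsing step are placeholders:

import time
import urllib.request

POLL_SECONDS = 30
URL = "http://example.com/now-playing"  # placeholder

def extract_artist(html):
    # Placeholder: parse the artist name out of the real markup here
    # (regex, html.parser, BeautifulSoup, whatever fits the page).
    return html.strip()[:80]

last_artist = None
while True:
    html = urllib.request.urlopen(URL).read().decode("utf-8", "replace")
    artist = extract_artist(html)
    if artist != last_artist:
        print("Now playing:", artist)
        last_artist = artist
    time.sleep(POLL_SECONDS)  # be polite to the server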
If you use urllib2 you can make a conditional request by sending an If-Modified-Since (or If-None-Match) header built from the previous response's headers. If the server responds with "304 Not Modified", the content hasn't changed since your last fetch.
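A sketch of that conditional-request idea, shown with Python 3's urllib.request (the successor of urllib2); the URL is a placeholder and the server must send a Last-Modified header for this to work:

import urllib.request
from urllib.error import HTTPError

URL = "http://example.com/now-playing"  # placeholder

first = urllib.request.urlopen(URL)
last_modified = first.headers.get("Last-Modified")

request = urllib.request.Request(URL)
if last_modified:
    request.add_header("If-Modified-Since", last_modified)
try:
    fresh = urllib.request.urlopen(request)
    print("changed, re-read", len(fresh.read()), "bytes")
except HTTPError as err:
    if err.code == 304:
        print("not modified since the last fetch")  # content unchanged
    else:
        raise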
Yes, this is the correct approach. To see changes on the web, you have to send a new query each time; live AJAX sites do exactly the same thing internally.
Some sites provide an additional API, including long polling. Look for documentation on the site or ask its developers whether one exists.