I have a Python web application in which one function can take up to 30 seconds to complete.
I have been kicking off the process with a cURL request (including parameters) from PHP, but I don't want the user staring at a blank screen the whole time the Python function is working.
Is there a way to have it process the data 'in the background', e.g. close the http socket and allow the user to do other things while it continues to process the data?
Thank you.
You should use an asynchronous approach to transfer the data from the PHP script (or directly from the Python script) to an already-rendered HTML page on the user's side.
Check a JavaScript framework for whichever way is easiest for you (for example, jQuery). Then return the HTML page minus the results to the user, with JavaScript code that shows a "calculating" animation and fetches the results, as XML or JSON, from the proper URL once they are done.
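On the server side, the pattern above needs somewhere to run the slow function and somewhere for the polling request to look up its state. A minimal, framework-agnostic sketch (all names here are hypothetical; `slow_task` stands in for the 30-second function, and the in-memory `jobs` dict would be a database or cache in production):

```python
import threading
import time
import uuid

# In-memory job registry; use a database or cache in production so it
# survives restarts and works across processes.
jobs = {}

def slow_task():
    """Stand-in for the ~30-second Python function."""
    time.sleep(0.1)
    return "result"

def start_job(task=slow_task):
    """Kick off `task` in a background thread; return a job id immediately."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"state": "queued", "result": None}

    def run():
        jobs[job_id]["state"] = "running"
        result = task()
        jobs[job_id] = {"state": "done", "result": result}

    threading.Thread(target=run).start()
    return job_id

def job_status(job_id):
    """Looked up by the 'calculating' page's polling request."""
    return jobs.get(job_id, {"state": "unknown", "result": None})
```

A view would call `start_job()` on form submission and return the page with the job id embedded; a second view would return `job_status(job_id)` as JSON for the jQuery polling code to consume.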
Related
I created a simple Python script that takes a URL as input and, once it is passed, does a curl request through multiple proxies and shows the response codes. Now I want to create a web page that others (my colleagues) can use, as it will help them too. The page should let them select a set of proxy addresses and input a URL; upon submission it would run the script on a machine (the web server) and populate the results into the page using the Dynatable or DataTables frameworks. I'm not sure how to do this, or whether it's even possible, as I haven't worked much with web servers, so I want to know what tools I will need and how to design it.
If the Python script can be called in a terminal (as it needs to run curl) and the results shown on a web page based on the script's output (which I will export to a CSV file), how can I do that? What should I use: XAMPP, WAMP, LAMP, etc.?
You need a web framework for this: something that will listen for requests coming from the front end (the web page). There are tons of Python frameworks out there; you can check the Bottle framework as a starting point.
So the flow would be something like this:
1. From the web page, a request is sent to the backend.
2. The backend receives the request and runs your logic (connecting to some server, computing, etc.).
3. Once the backend process is done, it sends the response back to the web page.
You can either use a REST approach or use the templating functionality of the framework.
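Since you already export the results to CSV, the backend's job is mostly to run your script and reshape that CSV into something the table widget understands. A sketch, assuming a header row in the CSV and the `{"data": [...]}` shape DataTables accepts (the route and helper names are hypothetical):

```python
import csv
import io

def csv_to_datatable(csv_text):
    """Convert the script's CSV export (header row + data rows) into the
    {"data": [...]} structure that DataTables/Dynatable can consume."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {"data": list(reader)}

# A Bottle route around it would then be roughly (hypothetical names):
#
#   from bottle import route, request
#
#   @route("/check", method="POST")
#   def check():
#       url = request.forms.get("url")
#       proxies = request.forms.getall("proxy")
#       run_proxy_script(url, proxies)   # your existing curl-based script
#       with open("results.csv") as f:
#           return csv_to_datatable(f.read())
```

With this shape, no XAMPP/WAMP/LAMP stack is needed; Bottle (or any Python framework) serves both the form page and the JSON results itself.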
You will need a request. You can do this in JavaScript with an AJAX request (https://www.w3schools.com/xml/ajax_intro.asp); there are many JavaScript frameworks that will make this easier and shorter to code, and some can hook straight into your Python backend so you write little JavaScript yourself.
I have written my Python script to take an argument via the command line.
My script simply takes a URL (input via the command line) and counts how many lines of HTML code are on the page at that URL. It's a very simple script.
I would like to put this script on my website: if you click a button on my web page, the URL of the page is sent to the script, which processes the information and returns the result to my website.
How would I be able to do this? What would the back-end architecture look like? How can I increase my processing speeds? Will my script be able to process multiple clicks from different users simultaneously?
There are a couple of ways you could do this. The first and simplest would be CGI, the Common Gateway Interface. The second would be a Python web framework like Flask or Django, which you could configure via WSGI so that, as you said, it runs when its URL is accessed.
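To make the WSGI route concrete, here is a minimal sketch using only the standard library (`wsgiref`); the line-counting logic is my guess at what your script does, and there is no error handling for brevity:

```python
import urllib.request
from urllib.parse import parse_qs

def count_html_lines(html):
    """Core logic: count the lines of HTML on a page."""
    return len(html.splitlines())

def app(environ, start_response):
    """Minimal WSGI app: GET /?url=... fetches the page and returns the count."""
    params = parse_qs(environ.get("QUERY_STRING", ""))
    url = params.get("url", [""])[0]
    if not url:
        start_response("400 Bad Request", [("Content-Type", "text/plain")])
        return [b"missing url parameter"]
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    body = f"{count_html_lines(html)} lines".encode()
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

# To serve it during development (production would sit behind a real
# WSGI server such as gunicorn or mod_wsgi):
#
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

On your other questions: each request to the dev server is handled one at a time, so for simultaneous users you would run it under a multi-worker WSGI server; the fetch itself dominates the processing time, so speed gains come from the network, not the counting.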
I've written an algorithm in python and a web interface around that. After you submit the form and start the algorithm, I'd like to push and update data on the page as it's running. How can I accomplish this?
To have real-time or semi-real-time communication with the web page, the options are:
Automatically refresh the page every few seconds using a meta refresh tag in the HTML <head>
Fetch updated data with JavaScript and an AJAX HTTP GET: https://api.jquery.com/jquery.get/
Use server-sent events: http://www.html5rocks.com/en/tutorials/eventsource/basics/
Use WebSockets: http://www.html5rocks.com/en/tutorials/websockets/basics/
All approaches except the first require rudimentary JavaScript skills in addition to server-side Python; the latter two call for an advanced understanding of real-time communications. Thus, if you are not familiar with web development, I recommend picking the meta refresh tag.
On the server side you need to start a process or a thread to handle the long-running task, and have it write its progress to a database. When the web UI updates itself, it reads the latest results from the database and renders them in the browser.
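The server-side half of this can be sketched with a worker thread and SQLite (table and job names are hypothetical; the `time.sleep` stands in for the algorithm's real work):

```python
import sqlite3
import tempfile
import threading
import time

# Demo-only database path; in production point this at your real database.
DB_PATH = tempfile.mktemp(suffix=".db")

def init_db():
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS progress (job TEXT PRIMARY KEY, pct INTEGER)"
        )

def long_running(job):
    """The worker: does the slow work and records progress as it goes."""
    conn = sqlite3.connect(DB_PATH)  # each thread needs its own connection
    for pct in range(0, 101, 20):
        time.sleep(0.01)  # stand-in for a chunk of real work
        with conn:
            conn.execute(
                "INSERT OR REPLACE INTO progress VALUES (?, ?)", (job, pct)
            )
    conn.close()

def read_progress(job):
    """Called by the page's refresh/AJAX request to get the latest value."""
    with sqlite3.connect(DB_PATH) as conn:
        row = conn.execute(
            "SELECT pct FROM progress WHERE job = ?", (job,)
        ).fetchone()
    return row[0] if row else 0

init_db()
worker = threading.Thread(target=long_running, args=("job-1",))
worker.start()
```

Whichever front-end option you pick (meta refresh, AJAX, SSE, WebSockets), it ultimately just renders what `read_progress()` returns.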
I am new to the programming world and trying out something with Python.
My requirement is an HTTP web server (built using BaseHTTPServer) that runs forever, takes an input binary file through an HTML form based on the user's selection, and returns a set of HTML files to the web client.
As part of this, when the user selects an input file, a set of folders is created on the server with HTML files written inside them. I thought of adding a tidy-up feature for these folders, so that every day they would be cleaned up automatically based on a configuration.
I could build both of these modules in my script (the HTTP web service and the server-side tidy-up); the tidy-up part is achieved using Python's sched module.
Both of these functionalities work independently, i.e.:
When I comment out the tidy-up function, I can access the server URL in the browser and the index.html page shows up correctly (it accepts the binary, parsing happens, and the output HTML files are returned).
When I comment out the HTTP server function, I can verify that the tidy-up works as configured.
But when I have both functions in place, the tidy-up is invoked correctly at the scheduled time, yet the index.html page does not load when I request the server in the browser.
I researched the sched module enough to understand that it just schedules multiple events on the system with time delays and priorities.
I cannot get both functionalities to work at the same time.
Questions:
Is using sched a correct approach to achieve the tidy-up?
If yes, what could be the reason that the HTTP service is blocked and only the tidy-up works?
Any advice would be helpful. Thanks
For now, I changed the function call for the tidy-up feature to use the background scheduler implementation from Python's APScheduler module.
This does not impact the function serving HTTP requests and has solved my problem for now.
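For anyone who wants to stay with the standard library: the likely cause of the original problem is that `scheduler.run()` blocks whichever thread calls it, so `serve_forever()` was never reached. Running the sched loop in a daemon thread gives the same non-blocking behavior as APScheduler's background scheduler. A sketch (the short intervals here are for demonstration; a daily clean-up would use `24 * 3600`):

```python
import sched
import threading
import time

events_fired = []

def tidy_up():
    """Stand-in for the folder clean-up; reschedules itself for the next run."""
    events_fired.append(time.time())
    scheduler.enter(0.05, 1, tidy_up)  # real use: 24 * 3600 for daily runs

scheduler = sched.scheduler(time.time, time.sleep)
scheduler.enter(0.05, 1, tidy_up)

# scheduler.run() blocks the calling thread, so run it in a daemon thread
# and keep the main thread free for the HTTP server:
threading.Thread(target=scheduler.run, daemon=True).start()

# The main thread can now run, e.g.:
#   HTTPServer(("", 8080), MyHandler).serve_forever()
```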
OK, I decided to post the question here because I really don't know what to do, or even if it's possible. You might tell me it's a repost, but I already read similar posts about it and they didn't help me out.
Here is the deal: I have an admin interface with Django and want to download a file from an external site onto my server, with a progress bar showing the percentage of the download.
I can't do anything while it's downloading. I tried to run a command with call_command within a view, but it's the same.
Is it because the Django development server is single-threaded? Is it even possible to achieve what I want to do?
Thanks in advance,
It's possible, but it takes some jumps through the metaphorical hoops. My answer isn't Django-specific; you'll need to translate it to your framework.
Start a thread that does the actual download. While it downloads, it should update some data structure in the user's session (bytes downloaded, total size of the download, etc.).
In the browser, start a timer that makes AJAX requests to a "download status" URL.
Create a handler for that URL which takes the status from the session and turns it into JSON or a piece of HTML that you send to the browser.
In the AJAX handler's success method, take the JSON/HTML and put it into the current page. Unless the download is complete (this part is simpler with JSON), restart the timer.