I am writing an application in Pylons that relies on the output of some system commands such as traceroute. I would like to display the output of the command as it is generated rather than wait for it to complete and then display all at once.
I found how to access the output of the command in Python with the answer to this question:
How can I perform a ping or traceroute in python, accessing the output as it is produced?
Now I need to find a way to get this information to the browser as it is being generated. I was planning on using jQuery's load() to load the output of a script into a <div>. The problem is that Pylons controllers use return, so the output has to be complete before Pylons renders the page and the web server responds to the client with the content.
Is there any way to have a page display content as it is generated within Pylons or will this have to be done with scripting outside of Pylons?
Basically, I'm trying to do something like this:
http://network-tools.com/default.asp?prog=trace&host=www.bbc.co.uk
pexpect will let you get the output as it comes, with no buffering.
To update info promptly in the user's browser, you need JavaScript on that browser sending appropriate AJAX requests to your server (Dojo or jQuery will make that easier, though they're not strictly required) and updating the page as new responses come in. Without client-side cooperation (and JS + AJAX is the simplest way to get that cooperation), there's no sensible way to do it on the server side alone.
So the general approach is: send an AJAX query from the browser, have the server respond as soon as it has one more line, have the JS on the browser update the contents and immediately send another query, and repeat until the server responds with an "I'm all done" marker (e.g. an "empty" response may work for that purpose).
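A rough sketch of the server side of that loop, assuming pexpect (mentioned above) and a single server process; the JOBS dict and the start/poll functions are illustrative names, not part of the Pylons API:

import pexpect

JOBS = {}  # job id -> running pexpect child (assumes a single server process)

def start(job_id, host):
    # pexpect runs traceroute through a pseudo-terminal, so the output arrives
    # as it is produced instead of being block-buffered
    JOBS[job_id] = pexpect.spawn('traceroute %s' % host, encoding='utf-8')

def poll(job_id):
    # called by each AJAX request: block until the next line is available
    child = JOBS[job_id]
    line = child.readline()  # returns '' once the command has finished
    if not line:
        del JOBS[job_id]
    return line  # the browser appends this and, if it is non-empty, polls again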
You may want to look at this FAQ entry. Then, with JS, you always clear the screen before writing the new output.
I haven't tried it with Pylons, but you could try to show the output of the slow component in an iframe on the page (using MIME type text/plain) and yield each chunk to the iframe as it is generated. For fun, I just put this together as a WHIFF demo. Here is the slowly generated web content WSGI application:
import time

def slow(env, start_response):
    # plain-text response, so the browser can render each chunk as it arrives
    start_response("200 OK", [('Content-Type', 'text/plain')])
    return slow_generator()

def slow_generator():
    yield "slowly generating 20 timestamps\n"
    for i in range(20):
        yield "%s: %s\n" % (i, time.ctime())
        time.sleep(1)
    yield "done!"

__wsgi__ = slow
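This isn't part of the WHIFF demo, but for quick local testing the same generator app can be served straight from the standard library (note that some WSGI servers and proxies buffer responses, which would defeat the streaming effect):

from wsgiref.simple_server import make_server

# serve the `slow` app above on http://localhost:8000/ for testing
make_server('localhost', 8000, slow).serve_forever()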
This file is deployed on my laptop at: http://aaron.oirt.rutgers.edu/myapp/root/misc/slow.
Here is the WHIFF configuration template which includes the slow page in an iframe:
{{env whiff.content_type: "text/html"/}}
Here is an iframe with slowly generated content:
<hr>
<iframe frameborder="1" height="300px" width="300px" scrolling="yes"
    style="background-color:#99dddd;"
    src="slow"
></iframe>
<hr>
Isn't that cool?
This is deployed on my laptop at http://aaron.oirt.rutgers.edu/myapp/root/misc/showSlowly.
Hmm, I just tried the above link in Safari and it didn't work right... apparently there are some browser differences. It seems to work in Firefox, at least.
I have a use case where, while browsing a website, some of its pages contain a button. I want to override that button's function so that it runs my local Python script instead of following the href link (which points to some page on the server).
I have three possible solutions:
1. Repeated polling to check whether the button has been clicked. Whenever it gets clicked, I will call the required function. This is certainly not a good idea, as it will slow down the browser.
2. Overriding the button's function in the page source.
3. Creating a new button on every page which will call the function only if the actual button is also present on the page (otherwise we will show that 'This action can't be performed for this page').
I think 2 and 3 would be better if they could be done, but I couldn't find many resources on Google. Any link/answer on how this can be done would be of great help.
The easiest way to do this is probably to run a Python Web server on the local machine, which runs your Selenium code when it receives a particular HTTP request. You could write your server like this using Flask, for example:
from flask import Flask, abort, request

app = Flask(__name__)

@app.route('/dosomestuff', methods=['POST'])
def display():
    # Check that the request is coming from the local machine
    if request.remote_addr != '127.0.0.1':
        abort(403)
    do_some_stuff()  # Call whatever code you want to run here
    return "Done"

if __name__ == '__main__':
    app.run(port=8080)  # listen on the port the form below posts to
(Using Flask is probably overkill here, and you could probably do it without a library, but Flask is the method I'm most familiar with.)
Then on your Web page you'd just set up a button to send the appropriate request:
<form action="http://localhost:8080/dosomestuff" method="post">
    <input type="submit" value="Do some stuff">
</form>
Then, so long as your local server is running at the time, clicking the button should trigger your Python code.
Step 2 can be done by using a Chrome/Firefox extension. It is not possible to do it with Selenium, because it requires editing the DOM of the page you are already browsing.
This link might be helpful.
I currently have a Flask web server that pulls data from a JSON API using the requests library.
For example:
def get_data():
    response = requests.get("http://myhost/jsonapi")
    ...
    return response

@main.route("/", methods=["GET"])
def index():
    return render_template("index.html", response=response)
The issue here is that naturally the GET method is only run once, the first time get_data is called. In order to refresh the data, I have to stop and restart the Flask wsgi server. I've tried wrapping various parts of the code in a while True / sleep loop but this prevents werkzeug from loading the page.
What is the most Pythonic way to dynamically GET the data I want without having to reload the page or restart the server?
You're discussing what are perhaps two different issues.
Let's assume the problem is that you're calling the dynamic data source, get_data(), only once and keeping its (static) value in a global response. This one-time call is not shown, but let's say it's somewhere in your code. If you are willing to refresh the page (/) to get updates, you could instead do:
@main.route("/", methods=['GET'])
def index():
    return render_template("index.html", response=get_data())
This would fetch fresh data on every page load.
Then, toward the end of your question, you ask how to "GET the data I want without having to reload the page or restart the server." That is an entirely different issue. You will have to use AJAX or WebSocket requests in your code. There are quite a few tutorials about how to do this (e.g. this one) that you can find by Googling "Flask AJAX," but it will require a JavaScript AJAX call from the page. I recommend finding examples of how this is done by searching "Flask AJAX jQuery," as jQuery will abstract and simplify what you need to do on the client side. Or, if you wish to use WebSockets for a lower-latency connection to your web page, that is also possible; search for examples (e.g. like this one).
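To make the AJAX route concrete, here is a minimal sketch of a JSON endpoint the page's JavaScript could poll; the /data path and the use of jsonify are illustrative choices, not taken from your code:

from flask import Flask, jsonify
import requests

app = Flask(__name__)

@app.route("/data", methods=["GET"])
def data():
    # hit the upstream API on every request so the client always gets fresh data
    response = requests.get("http://myhost/jsonapi")
    return jsonify(response.json())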
To add to Jonathan's comment, you can use frameworks like Stimulus or Turbolinks to do this dynamically, without having to write JavaScript in some cases, as the frameworks do a lot of the heavy lifting. https://stimulus.hotwired.dev/handbook/origin
I have two scripts running, one on port :80 and one on port :81. Because some of our users are having issues with stuff happening on the server on port :81, I'm trying to implement a workaround like this:
Old way of doing it, which works fine for most users:
AngularJS app makes request to example.com:81/getpdf/1
Flask server generates PNG and PDF files using PhantomJS and ImageMagick using two separate subprocess.Popen calls and the .wait() method
Using Flask's send_file(), the PDF gets sent back to the user and starts downloading
My workaround for this issue:
AngularJS makes a request to example.com/getpdf/1
The Flask server on :80 makes a new GET request, r = requests.get(url_with_port_81), faking the old AngularJS request to create the PNG/PDF
Instead of using send_file(), the :81 server now returns the path of the generated PDF
The :80 server then returns send_file(r.text)
Now, using my workaround, the subprocesses I run to create the PNG/PDFs somehow crash. I have to sudo pkill python, and only when I do so does a PNG show up in the folder on my server, with no data in it.
Basically, PhantomJS has run but hasn't loaded any data (only the HTML/CSS, but none of the important data that needs to come from the Flask server) and crashes. How is this even possible? I'm just faking the request the browser makes using requests.get, or am I not aware of something here?
I thought subprocess.Popen was non-blocking, so requests for data could still be answered while the PNG/PDFs were being generated?
I finally found the reason my subprocess kept crashing.
Apparently, it's a bug in Python < 2.7.3, described here: http://bugs.python.org/issue12786
I had to use 'close_fds=True' in my Popen call and all was fixed. Thanks for your effort either way, @Mark Hildreth!
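For anyone hitting the same bug, a hedged sketch of what the fixed call looks like (the PhantomJS command line here is illustrative, not the original code):

import subprocess

proc = subprocess.Popen(
    ['phantomjs', 'render.js', 'http://example.com:81/page/1', 'out.png'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    close_fds=True,  # workaround for http://bugs.python.org/issue12786 on Python < 2.7.3
)
proc.wait()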
I program in Python. I started a few months ago, so I am not the "guru" type of developer. I also know the basics of HTML and CSS.
I have seen a few tutorials about Node.js and I really like it. I cannot create those forms, bars, buttons, etc. with my knowledge of HTML and CSS.
Can I use Node.js to create what the user sees in the browser, and use Python to handle what happens when someone pushes the "submit" button? For example, redirects, SQL writes and reads, etc.
Thank you
You can call Python scripts in the back end at the Node server, in response to a button click by the user. For that you can use the child_process package. It allows you to call programs installed on your machine.
For example, here is how to run your script when the user POSTs something to the /reg page:
app.post('/reg', function(request, response){
    var spawn = require('child_process').spawn;
    var path = "location of your script";
    // create child process of your script and pass two arguments from the request
    var backend = spawn('python', [path, request.body.name, request.body.email]);
    backend.on('exit', function(code) {
        console.log(path + ' exited with code ' + code);
        if (code == 0)
            response.render('success'); // show success page if script runs successfully
        else
            response.redirect('bad');
    });
});
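For completeness, here is a hypothetical sketch of the Python script that the spawn() call above might point to; it simply reads the two arguments Node passes in (name and email) and exits with 0 on success:

import sys

def main():
    name, email = sys.argv[1], sys.argv[2]
    # do the real work here: write to a database, send a confirmation mail, etc.
    print("registered %s <%s>" % (name, email))
    return 0

if __name__ == '__main__':
    sys.exit(main())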
Python has to be installed on your system, along with any other Python libraries you will need. The Python script itself cannot respond to or redirect requests; that is Node's job, otherwise why would you use Node at all? When in Rome, do as the Romans do: use JavaScript in Node where you can, since calling external programs is not as fast as using JS libraries.
Node.js is a server-side JavaScript environment (like Python). It runs on the server, interacts with the database, generates the HTML that clients see, and isn't actually accessed directly by the browser.
Browsers, on the other hand, run clientside JavaScript directly.
If you want to use Python on the server, there are a bunch of frameworks that you can work with:
Django
Flask
Bottle
Web.py
CherryPy
many, many more...
I think you're thinking about this problem backwards. Node.js lets you run browser-style JavaScript outside a browser. You won't find it useful in your Python programming. If you want to stick with Python, you're better off using a framework such as Pyjamas to write JavaScript with Python, or another framework such as Flask or Twisted to integrate the JavaScript with Python.
I'm pretty new to web development, so I'm just trying to see if I have the big picture right for what I am trying to do. Forgive me if any terminology is wrong.
My Django app needs to do the following:
User uploads a file through his browser
File is processed by the server (can take up to an hour)
User sees the results in his browser
I'm having trouble figuring out how to accomplish step 2. Here is what I am thinking:
1. User uploads a file (pretty straightforward)
2. File is processed - a view function would go something like this:
def process(request):
    a. (get file from the request)
    b. (return a page which says "the server is running your job, results will be available in {ETA}")
    c. (start processing the data)
3. User sees the results in his browser - the browser queries the server at regular intervals to see if the job is done. When the job is ready, the browser gets the results.
My question is, in step 2 parts b and c, how can I return a response to the browser without waiting for the process to finish? Or, how can I ensure the process keeps running after I return the response to the browser? The process should ideally have access to the Django environment, as it will work with a database through Django's interface.
You need to offload the processing. You could use django-celery.
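A minimal sketch of how that could look; the Upload model, the task name, and the template are illustrative, and it assumes Celery is already configured for the Django project. The view saves the upload, enqueues the job, and returns immediately, while the worker does the hour-long processing with full access to Django's settings and ORM:

# tasks.py
from celery import shared_task
from .models import Upload

@shared_task
def process_file(upload_id):
    upload = Upload.objects.get(pk=upload_id)
    # ... the long-running processing goes here; it runs in a Celery worker ...

# views.py
from django.shortcuts import render
from .models import Upload
from .tasks import process_file

def process(request):
    upload = Upload.objects.create(file=request.FILES['file'])
    process_file.delay(upload.pk)  # returns immediately; work happens in the worker
    return render(request, 'queued.html', {'upload': upload})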