I have a simulator application that continuously spits out data, formatted in JSON, to a given host name and port number (UDP). I would like to be able to point the simulator output to a Django web application so that I can monitor/process the data as it comes in.
How do I receive and process data in real time using Django? What tools or packages are available to accomplish this? I did come across this answer: How to serve data from UDP stream over HTTP in Python?, but I don't completely understand it.
Ex: Similar to this page: http://money.cnn.com/data/markets/
ALSO, I don't need to store any of the streaming data in a database. I just need to perform lookups based on the streaming data. Maybe it's not a Django issue at all?
Using JavaScript:
Create a webpage with all the results, then use JavaScript to collect the data from the page and update it every X seconds.
Alternatively, have the webpage serve the JSON data directly and have the JavaScript grab it and interpret it (see: Get HTML code using JavaScript with a URL).
Then update the page using JavaScript. W3Schools has great JS tutorials.
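On the Django side, one rough way to wire this up could look like the sketch below. Everything here is made up for illustration (a real deployment would more likely use Channels or a separate listener process): a background thread holds the latest UDP datagram in memory, and a view exposes it for the JavaScript to poll.

# views.py: a rough sketch, not production code. Assumes the simulator
# sends one JSON object per UDP datagram; host and port are placeholders.
import json
import socket
import threading

from django.http import JsonResponse

UDP_HOST, UDP_PORT = "0.0.0.0", 9999  # point the simulator's output here
latest = {}                           # most recent datagram received
lock = threading.Lock()

def _listen():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((UDP_HOST, UDP_PORT))
    while True:
        data, _addr = sock.recvfrom(65535)
        with lock:
            latest.clear()
            latest.update(json.loads(data))  # assumes each datagram is a JSON object

threading.Thread(target=_listen, daemon=True).start()

def stream_data(request):
    # The page's JavaScript polls this view every X seconds.
    with lock:
        return JsonResponse(dict(latest))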
So I had a number of amino acid sequence strings that I wanted to use as input to a tool that studies their interactions with certain components of the human immune system (http://www.cbs.dtu.dk/services/NetMHCcons/).
I wanted to ask what, if any, would be a way of accessing the site, inputting the data, and getting the output via a script (R or Python preferably). My main issue is that I have a lot of sequences that need to be queried separately, so I wanted to automate the whole thing. The website has one field labelled "Submission" which takes the string input. There is another field, "select species/loci", which gives a drop-down menu from which an option needs to be selected. Lastly there's a "submit" button. The output simply loads on the page after hitting submit.
I've tentatively poked around with RSelenium and RCurl but wanted to ask if there was a more efficient method.
I took a look at what it'd take to send a POST request to this service from Python, and it looks possible:
this form takes in "multipart/form-data" (see: How to send a "multipart/form-data" with requests in python?), so you'll need to send your data in that format. You could inspect a request from the browser (using the dev tools) and copy the fields from there as a starting point.
once the form is submitted, it doesn't give you the result right away. You'd need to get your job ID from the response, and then poll the URL http://www.cbs.dtu.dk/cgi-bin/webface2.fcgi?jobid={your_job_id}&wait=20 until it gives you the result.
the result will then need to be downloaded and parsed (a rough sketch of the whole flow follows below).
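Roughly, the whole flow could look like this with the requests library. The field names and regex below are guesses to be replaced with whatever the dev tools actually show:

# A rough sketch of submit-then-poll with requests. Field names are
# placeholders; copy the real ones from the browser's Network tab
# after a manual submission.
import re
import time

import requests

SUBMIT_URL = "http://www.cbs.dtu.dk/cgi-bin/webface2.fcgi"  # assumed form action
POLL_URL = "http://www.cbs.dtu.dk/cgi-bin/webface2.fcgi?jobid={}&wait=20"

files = {
    "SEQPASTE": (None, ">seq1\nMKVLAAGVV"),  # hypothetical field name
    "allele": (None, "HLA-A01:01"),          # hypothetical field name
}
resp = requests.post(SUBMIT_URL, files=files)  # sent as multipart/form-data

# The response embeds the job id somewhere; pull it out with a regex
# (the exact pattern depends on the actual response HTML).
jobid = re.search(r"jobid=(\w+)", resp.text).group(1)

# Poll until the job is no longer queued/running.
while True:
    result = requests.get(POLL_URL.format(jobid))
    if "queued" not in result.text and "running" not in result.text:
        break
    time.sleep(20)

print(result.text)  # download/parse the predictions from here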
This tool is, however, available as a portable version for Linux/Mac: https://services.healthtech.dtu.dk/software.php
Perhaps downloading that version would make things easier?
Try this:
Submitting to a web form using python
That link answers how to send web forms in Python using urllib. Extract the necessary data from the source code of the page you linked using the re module, and then send the request.
Save the HTML source code of http://www.cbs.dtu.dk/services/NetMHCcons/ in the Python file as
source_code = '''...'''
The HTML source can be viewed with Ctrl+U in Firefox.
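For instance, something along these lines. This is only a sketch: the regexes and field names depend on what the saved source actually contains, and note that this particular form may expect multipart/form-data rather than a urlencoded body:

import re
import urllib.parse
import urllib.request

source_code = '''...'''  # paste the saved HTML source here

# Pull the form's action URL and any hidden input fields out of the source.
action = re.search(r'<form[^>]+action="([^"]+)"', source_code).group(1)
fields = dict(re.findall(
    r'<input[^>]+type="hidden"[^>]+name="([^"]+)"[^>]+value="([^"]*)"',
    source_code))

# Add your own fields (the name here is hypothetical) and send the request.
fields["SEQPASTE"] = ">seq1\nMKVLAAGVV"
data = urllib.parse.urlencode(fields).encode()
with urllib.request.urlopen(action, data) as response:
    print(response.read().decode())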
Is there any way to display data from a Python script using HTML?
For example, I want to generate a random number every second in WebSocketClient.py and transmit it to a web server. On WebServer.html, the random numbers are displayed every second.
PC (Server)
- WebServer.html (port is specified)
Python (Raspi, Client)
- WebSocketClient.py (input: the PC's IP and port)
I have looked at several methods, but they are not suitable for me. I guess WebSocket is the best way to do this; however, most examples have the .html as the client and the .py as the server.
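To make the setup concrete, something like this is what I have in mind for the client side. This is only a rough sketch using the third-party websockets package, and the IP and port are placeholders:

# WebSocketClient.py: a rough sketch using the third-party "websockets"
# package (pip install websockets). The IP and port are placeholders.
import asyncio
import json
import random

import websockets

async def send_numbers():
    uri = "ws://192.168.0.10:8765"  # IP of the PC and its port
    async with websockets.connect(uri) as ws:
        while True:
            await ws.send(json.dumps({"value": random.random()}))
            await asyncio.sleep(1)  # one random number per second

asyncio.run(send_numbers())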
If I understand your question correctly, you need to evaluate HTML input (from a web server) using a Python script on your client.
This would involve two steps: Retrieving the input data from the web server and parsing it.
Take a look at the topics of web scraping and html parsing.
Beautiful Soup is a widely used library for this purpose.
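A minimal example of that approach; the URL and the element id below are placeholders:

import urllib.request

from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = urllib.request.urlopen("http://example.com/page").read()
soup = BeautifulSoup(html, "html.parser")

# Grab whichever element holds the value you need; this id is hypothetical.
element = soup.find(id="random-number")
print(element.get_text() if element else "element not found")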
I am making a program where I need to get information from a web server (I'm using Django) and then send data back to the server.
The thing is, I can only find ways to get information from the HTML page of the URL I requested.
How can I use a Python script to get data from the web server (like making a Django db query, but outside Django, using only Python)? I want to be able to get the information stored in Django without going through the website itself, that is, using only a Python script.
Thanks in advance,
I guess you'll want to use a REST framework. The most used are django-rest-framework and tastypie.
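With django-rest-framework, for example, you'd expose your models at an API endpoint, and any external Python script can then query it over plain HTTP, much like a db query. The endpoint and field names below are hypothetical:

import requests

# Hypothetical endpoint exposed by django-rest-framework on your server.
API_URL = "http://localhost:8000/api/items/"

resp = requests.get(API_URL, params={"name": "foo"})  # filtered, like a query
resp.raise_for_status()
for item in resp.json():
    print(item["id"], item["name"])  # fields depend on your serializer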
I am programming in Python.
I would like to extract real-time data from a webpage without refreshing it:
http://www.fxstreet.com/rates-charts/currency-rates/
I think the real-time data on the page is loaded via AJAX, but I am not quite sure.
I thought about driving a web browser from the program, but I don't really know/like that approach... Is there another way to do it?
I would like to fill a dictionary in my program (or even a SQL database) with the latest numbers each second.
Please help me do this in Python, thanks!
To get the data, you'll need to look through the JavaScript and HTML source to find the URL the page is hitting to get the data it displays. Then you can call that URL with urllib or your favorite Python library and parse the response.
Also, it may be easier if you use a plugin like Firebug that lets you watch the AJAX requests.
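Once you've found that URL in Firebug's network panel, the fetching itself is straightforward. The endpoint below is a placeholder, and the response is assumed to be JSON:

import json
import time
import urllib.request

RATES_URL = "http://example.com/ajax/rates"  # placeholder: the URL Firebug shows
rates = {}  # the dictionary to keep filled with the latest numbers

while True:
    with urllib.request.urlopen(RATES_URL) as resp:
        data = json.loads(resp.read().decode())
    rates.update(data)  # assumes the endpoint returns a JSON object
    time.sleep(1)       # one update per second, as requested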
I'm trying to read in info that is constantly changing from a website.
For example, say I wanted to read in the artist name that is playing on an online radio site.
I can grab the current artist's name but when the song changes, the HTML updates itself and I've already opened the file via:
f = urllib.urlopen("SITE")
So I can't see the updated artist name for the new song.
Can I keep closing and opening the URL in a while(1) loop to get the updated HTML code or is there a better way to do this? Thanks!
You'll have to periodically re-download the page. Don't do it constantly, because that will be too hard on the server.
This is because HTTP, by nature, is not a streaming protocol. Once you connect to the server, it expects you to throw an HTTP request at it, then it throws an HTTP response back at you containing the page. If your initial request is keep-alive (the default as of HTTP/1.1), you can throw the same request again and get the page up to date.
What would I recommend? Depending on your needs, fetch the page every n seconds and extract the data you need. If the site provides an API, you can possibly capitalize on that. Also, if it's your own site, you might be able to implement Comet-style Ajax over HTTP and get a true stream.
Also note that if it's someone else's page, the site may use Ajax via JavaScript to keep itself up to date; this means there are other requests causing the update, and you may need to dissect the site to figure out which requests you need to make to get the data.
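In its simplest form the polling loop is just this (shown with Python 3's urllib.request, the successor of the urllib.urlopen in your snippet; the URL and interval are placeholders):

import time
import urllib.request

URL = "http://example.com/radio"  # placeholder for the site
INTERVAL = 30  # seconds between fetches; be gentle on the server

while True:
    html = urllib.request.urlopen(URL).read().decode()
    # ...parse the current artist name out of `html` here...
    time.sleep(INTERVAL)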
If you use urllib2 you can send a conditional request (for example, with an If-Modified-Since header) and check the response. If the server sends back 304 Not Modified, the content hasn't changed.
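A sketch of that conditional-request pattern, shown with Python 3's urllib.request (which replaced urllib2); the URL is a placeholder:

import urllib.request
from urllib.error import HTTPError

URL = "http://example.com/radio"  # placeholder
last_modified = None  # Last-Modified value from the previous response

def fetch_if_changed():
    global last_modified
    req = urllib.request.Request(URL)
    if last_modified:
        req.add_header("If-Modified-Since", last_modified)
    try:
        with urllib.request.urlopen(req) as resp:
            last_modified = resp.headers.get("Last-Modified")
            return resp.read()  # the page changed (or this is the first fetch)
    except HTTPError as e:
        if e.code == 304:
            return None  # 304 Not Modified: content unchanged
        raise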
Yes, this is the correct approach. To get changes from the web, you have to send a new query each time. Live AJAX sites do exactly the same internally.
Some sites provide an additional API, including long polling. Look for documentation on the site, or ask their developers whether one exists.
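If a long-polling endpoint does exist, using it from Python is not much harder than normal polling (the URL below is hypothetical):

import requests

POLL_URL = "http://example.com/api/poll"  # hypothetical long-polling endpoint

while True:
    try:
        # A long-polling server holds the request open until new data
        # arrives, so allow a generous timeout.
        resp = requests.get(POLL_URL, timeout=60)
        if resp.status_code == 200:
            print(resp.json())
    except requests.exceptions.Timeout:
        continue  # no new data within the window; ask again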