I would like to make a table of chosen physical properties of elements (for example atomization enthalpy, vaporization enthalpy, heat of vaporization, boiling point), which are accessible on this page.
It is a huge pain to do it by hand, and I didn't find any other machine-processing-friendly source of such data on the internet.
I was trying to learn how to do it in Python (because I want to use this data in my other code written in Python / NumPy / Pandas).
I was able to download the webpage's HTML with urllib2, and I was trying to learn how to use an HTML/XML parser like ElementTree or minidom. However, I have no experience with web programming and HTML/XML processing.
Using lxml's XPath support, you can parse the data easily. Here's an example that parses the atomization enthalpy:
import lxml.html
import urllib2
html = urllib2.urlopen("http://environmentalchemistry.com/yogi/periodic/W.html").read()
doc = lxml.html.document_fromstring(html)
result = doc.xpath("/html/body/div[2]/div[2]/div[1]/div[1]/ul[7]/li[8]")
You could dynamically generate the XPath string for the different elements, and use a dict to map the required fields.
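For instance, here is a rough sketch of that idea; the element symbols, the single property in the dict, and the fixed XPath index are all assumptions that would need checking against each page:

import lxml.html
import urllib2

# hypothetical element symbols and a property -> XPath mapping;
# the index is taken from the example above and may differ per page
elements = ['W', 'Fe', 'Cu']
xpaths = {'atomization enthalpy': "/html/body/div[2]/div[2]/div[1]/div[1]/ul[7]/li[8]"}

data = {}
for symbol in elements:
    url = "http://environmentalchemistry.com/yogi/periodic/%s.html" % symbol
    doc = lxml.html.document_fromstring(urllib2.urlopen(url).read())
    data[symbol] = {}
    for prop, xp in xpaths.items():
        result = doc.xpath(xp)
        data[symbol][prop] = result[0].text_content() if result else None

print data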
Thank you, raphonic
It was necessary to modify your code slightly to get it to work, but thanks for the kickstart. This code works:
import lxml.html
import lxml.etree
import urllib2
opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
infile = opener.open('http://environmentalchemistry.com/yogi/periodic/W.html')
html = infile.read()
doc = lxml.html.document_fromstring(html)
result = doc.xpath("/html/body/div[2]/div[1]/div[1]/div[1]/ul[7]/li[8]")
print lxml.etree.tostring(result[0])
but it is probably not the best solution.
Anyway, because the structure of the page is not exactly the same for different elements, I would probably just use simple string.find() and regular expressions, like this:
import re
import urllib2

opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
infile = opener.open('http://environmentalchemistry.com/yogi/periodic/W.html')
page = infile.read()

# locate the property label and take the 50 characters that follow it
i = page.find("Heat of Vaporization")
substr = page[i:i+50]
print substr

# keep only digits and the decimal point
non_decimal = re.compile(r'[^\d.]+')
print non_decimal.sub('', substr)
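Since the end goal is a Pandas table, here is a hedged sketch of scaling this up; the element symbols, the property labels, and the 50-character window are assumptions:

import re
import urllib2
import pandas as pd

elements = ['W', 'Fe', 'Cu']            # whichever elements you need
labels = ['Heat of Vaporization', 'Boiling Point']
non_decimal = re.compile(r'[^\d.]+')

opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]

rows = {}
for el in elements:
    page = opener.open('http://environmentalchemistry.com/yogi/periodic/%s.html' % el).read()
    row = {}
    for label in labels:
        i = page.find(label)
        # keep the raw number only if the label was actually found
        row[label] = non_decimal.sub('', page[i:i+50]) if i != -1 else None
    rows[el] = row

df = pd.DataFrame.from_dict(rows, orient='index')
print df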
I am trying to get a set of URLs (which are webpages) from the New York Times, but I get a different answer. I am sure that I gave the correct class, yet it extracts different classes. My ny_url.txt has "http://query.nytimes.com/search/sitesearch/?action=click&region=Masthead&pgtype=SectionFront&module=SearchSubmit&contentCollection=us&t=qry900#/isis; http://query.nytimes.com/search/sitesearch/?action=click&region=Masthead&pgtype=SectionFront&module=SearchSubmit&contentCollection=us&t=qry900#/isis/since1851/allresults/2/"
Here is my code:
import urllib2
import urllib
from cookielib import CookieJar
from bs4 import BeautifulSoup
cj = CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
text_file = open('ny_url.txt', 'r')
for line in text_file:
    print line
    soup = BeautifulSoup(opener.open(line))
    links = soup.find_all('div', attrs={'class': 'element2'})
    for href in links:
        print href
Well, it's not that simple.
The data you are looking for is not in the page source downloaded by urllib2.
Try printing opener.open(line).read(); you will find the data missing.
This is because the site makes another GET request to http://query.nytimes.com/svc/cse/v2pp/sitesearch.json?query=isis&page=1
where your query parameters are passed within the URL: query=isis and page=1.
The data fetched is in JSON format; try opening the URL above in the browser manually and you will find your data there.
So a pure Pythonic way would be to call this URL and parse the JSON to get what you want.
No rocket science needed - just parse the dict using the proper keys.
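For example, a minimal sketch; the structure of the JSON response is not documented here, so inspecting the top-level keys first is the safest start:

import json
import urllib2

url = 'http://query.nytimes.com/svc/cse/v2pp/sitesearch.json?query=isis&page=1'
data = json.loads(urllib2.urlopen(url).read())

# inspect the structure first, then drill down to the fields you need
print data.keys()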
Or
An easier way would be to use a webdriver like Selenium: navigate to the page and parse the page source with BeautifulSoup. That should easily fetch the entire content.
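A rough sketch of that route, assuming Firefox and its driver are installed (an extra dependency) and that a short fixed wait is enough for the results to load:

import time
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Firefox()
for line in open('ny_url.txt'):
    driver.get(line.strip())
    time.sleep(2)  # crude wait for the JavaScript-loaded results; tune as needed
    soup = BeautifulSoup(driver.page_source)
    for div in soup.find_all('div', attrs={'class': 'element2'}):
        print div
driver.quit()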
Hope that helps. Let me know if you need more insights.
I've been trying to get information from a site, and recently found out that it is stored in childNodes[0].data.
I'm pretty new to Python and have never tried scripting against websites.
Somebody told me I could make a tmp.xml file and extract the information from there, but since it only gets the source code (which I think is of no use to me), I don't get any results.
Current code:
import urllib2
from xml.dom.minidom import parse

response = urllib2.urlopen(get_link)
html = response.read()
with open("tmp.xml", "w") as f:
    f.write(html)
dom = parse("tmp.xml")
name = dom.getElementsByTagName("name[0].firstChild.nodeValue")
I've also tried using dom = parse(html), without any better result.
getElementsByTagName() takes an element name, not an expression. It is highly unlikely that the page you are loading contains any <name[0].firstChild.nodeValue> tags.
If you are loading HTML, use an HTML parser instead, such as BeautifulSoup. For XML, the ElementTree API is a lot easier to use than the (archaic and very verbose) DOM API.
Neither approach requires that you first save the source to disk; both APIs can parse directly from the response object returned by urllib2.
# HTML
import urllib2
from bs4 import BeautifulSoup
response = urllib2.urlopen(get_link)
soup = BeautifulSoup(response.read(), from_encoding=response.headers.getparam('charset'))
print soup.find('title').text
or
# XML
import urllib2
from xml.etree import ElementTree as ET
response = urllib2.urlopen(get_link)
tree = ET.parse(response)
print tree.find('elementname').text
I'm trying to read data from a website that contains only text. I'd like to read only the data that follows "&values". I've been able to open the entire website, but I don't know how to get rid of the extraneous data and I don't know any HTML. Any help would be much appreciated.
The contents of that URL look like URL query parameters. You could use urlparse.parse_qs to parse them into a dict:
import urllib2
import urlparse
url = 'http://www.tip.it/runescape/gec/price_graph.php?avg=1&start=1327715574&mainitem=10350&item=10350'
response = urllib2.urlopen(url)
content = response.read()
params = urlparse.parse_qs(content)
print(params['values'])
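Note that parse_qs maps each parameter to a list of values, so params['values'] is a one-element list; if (as it appears) that element is a comma-separated run of numbers, you can split it further:

# turn the single comma-separated string into integers
numbers = [int(v) for v in params['values'][0].split(',')]
print numbers[:5]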
You may want to look into the re module (although if you do eventually move to HTML, regex is not the best solution). Here is a basic example that grabs the text after &values and returns the following number/comma/space combinations:
>>> import re
>>> import urllib2
>>> url = 'http://www.tip.it/runescape/gec/price_graph.php?avg=1&start=1327715574&mainitem=10350&item=10350'
>>> contents = urllib2.urlopen(url).read()
>>> values = re.findall(r'&values=([\d,\s]*)', contents)
>>> values[0].split(',')
['33900000', '33900000', '33900000', #continues....]
I'm doing a project for my school in which I would like to compare scam e-mails. I found this website: http://www.419scam.org/emails/
Now what I would like to do is save every scam in a separate document so that later on I can analyse them.
Here is my code so far:
import BeautifulSoup, urllib2
address='http://www.419scam.org/emails/'
html = urllib2.urlopen(address).read()
f = open('test.txt', 'wb')
f.write(html)
f.close()
This saves me the whole HTML file as text; now I would like to strip the file and save the content of the HTML links to the scams:
01
02
03
etc.
If I get that, I would still need to go a step further and open and save another href. Any idea how to do it all in one Python script?
Thank you!
You picked the right tool in BeautifulSoup. Technically you could do it all in one script, but you might want to segment it, because it looks like you'll be dealing with tens of thousands of e-mails, all of which are separate requests - and that will take a while.
This page is going to help you a lot, but here's just a little code snippet to get you started. It gets all of the HTML tags that are index pages for the e-mails, extracts their href links, and prepends a bit to the front of the URL so they can be accessed directly.
from bs4 import BeautifulSoup
import re
import urllib2
soup = BeautifulSoup(urllib2.urlopen("http://www.419scam.org/emails/"))
tags = soup.find_all(href=re.compile("20......../index\.htm"))
links = []
for t in tags:
    links.append("http://www.419scam.org/emails/" + t['href'])
're' is Python's regular expressions module. In the fifth line, I told BeautifulSoup to find all the tags in the soup whose href attribute matches that regular expression. I chose this regular expression to get only the e-mail index pages rather than all of the href links on that page; I noticed that the index page links all had that pattern in their URLs.
Having all the proper 'a' tags, I then looped through them, extracting the string from the href attribute with t['href'] and prepending the rest of the URL to the front of the string, to get raw string URLs.
Reading through that documentation, you should get an idea of how to expand these techniques to grab the individual e-mails.
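As a hedged sketch of that next step, continuing from the snippet above (the '.htm' filter and the relative-link layout of the index pages are assumptions to verify against the actual HTML):

# walk each index page collected above and print candidate e-mail URLs
for index_url in links:
    index_soup = BeautifulSoup(urllib2.urlopen(index_url))
    base = index_url.rsplit('/', 1)[0]
    for a in index_soup.find_all('a', href=True):
        if a['href'].endswith('.htm') and a['href'] != 'index.htm':
            print base + '/' + a['href']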
You might also find value in requests and lxml.html. Requests is another way to make HTTP requests, and lxml is an alternative for parsing XML and HTML content.
There are many ways to search the html document but you might want to start with cssselect.
import requests
from lxml.html import fromstring
url = 'http://www.419scam.org/emails/'
doc = fromstring(requests.get(url).content)
atags = doc.cssselect('a')
# using .get('href', '') syntax because not all a tags will have an href
hrefs = (a.attrib.get('href', '') for a in atags)
Or, as suggested in the comments, use .iterlinks(). Note that you will still need to filter if you only want 'a' tags. Either way, the .make_links_absolute() call is probably going to be helpful. It is your homework though, so play around with it.
doc.make_links_absolute(base_url=url)
hrefs = (l[2] for l in doc.iterlinks() if l[0].tag == 'a')
Next up for you... how to loop through and open all of the individual spam links.
To get all links on the page you could use BeautifulSoup. Take a look at this page; it can help, and it actually tells you how to do exactly what you need.
To save all pages, you could do the same as what you do in your current code, but within a loop that iterates over all the links you'll have extracted and stored, say, in a list.
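A minimal sketch of that loop, reusing the old BeautifulSoup import from your code (the '.htm' filter and the flattened file names are just one choice, not the only way):

import BeautifulSoup, urllib2

address = 'http://www.419scam.org/emails/'
html = urllib2.urlopen(address).read()
soup = BeautifulSoup.BeautifulSoup(html)

for a in soup.findAll('a', href=True):
    href = a['href']
    if not href.endswith('.htm'):
        continue
    # fetch each linked page and save it under a flattened file name
    page = urllib2.urlopen(address + href).read()
    f = open(href.replace('/', '-'), 'wb')
    f.write(page)
    f.close()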
Here's a solution using lxml + XPath and urllib2:
#!/usr/bin/env python2 -u
# -*- coding: utf8 -*-
import cookielib, urllib2
from lxml import etree
cj = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cj))
# set the User-Agent on the opener; setting addheaders on the response object has no effect
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
page = opener.open("http://www.419scam.org/emails/")
reddit = etree.HTML(page.read())
# XPath expression: get all links under body/p[2] whose href contains ".htm"
for node in reddit.xpath('/html/body/p[2]/a[contains(@href,".htm")]'):
    for i in node.items():
        url = 'http://www.419scam.org/emails/' + i[1]
        page = opener.open(url)
        lst = url.split('/')
        try:
            if lst[6]:  # else it's a "month" link
                filename = '/tmp/' + url.split('/')[4] + '-' + url.split('/')[5]
                f = open(filename, 'w')
                f.write(page.read())
                f.close()
        except IndexError:
            pass
# vim:ts=4:sw=4
You could use HTMLParser and specify the type of tag you are searching for.
from HTMLParser import HTMLParser
import urllib2
class MyHTMLParser(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for attr in attrs:
                if attr[0] == 'href':
                    print attr[1]
address='http://www.419scam.org/emails/'
html = urllib2.urlopen(address).read()
f = open('test.txt', 'wb')
f.write(html)
f.close()
parser = MyHTMLParser()
parser.feed(html)
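If you want the links in a list instead of printed (a small variation, not part of the answer above), store them on the parser instance:

class LinkCollector(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

collector = LinkCollector()
collector.feed(html)
print collector.links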
I'm trying to learn Python. My only experience is AppleScript, and it's not so easy to learn... so far, anyway.
I'm trying to parse an XML weather site, and so far I have the data I need, but I can't figure out how to get it into a list to process it further. Can anyone help?
from BeautifulSoup import BeautifulSoup
import xml.etree.cElementTree as ET
from xml.etree.cElementTree import parse
import urllib2
url = "http://www.weatheroffice.gc.ca/rss/city/ab-52_e.xml"
response = urllib2.urlopen(url)
local_file = open("\Temp\weather.xml", "w")
local_file.write(response.read())
local_file.close()
invalid_tags = ['b', 'br']
tree = parse("\Temp\weather.xml")
stuff = tree.findall("channel/item/description")
item = stuff[1]
parsewx = BeautifulSoup(stuff[1].text)
for tag in invalid_tags:
    for match in parsewx.findAll(tag):
        match.replaceWithChildren()
print parsewx
Since XML is structured data, BeautifulSoup returns a tree of Tags.
The documentation has extensive information on how to search and navigate in that tree.
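For example, to collect the text fragments of the parsed description into a list, here is a sketch against the parsewx object built above:

# every text node in the tree, with whitespace-only pieces dropped
pieces = [text.strip() for text in parsewx.findAll(text=True) if text.strip()]
print pieces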